Joe Conley Tagged chariot Random thoughts on technology, books, golf, and everything else that interests me http://www.josephpconley.com/name/chariot Chariot Day 2017 Talk <p>This past weekend I had the pleasure of attending and speaking at Chariot Day 2017, an internal conference put on by my fellow Charioteers. This is the third one I’ve attended, and I was amazed by the impressive breadth and depth of the talks. I feel very lucky to be working with people much smarter than me!</p> <p>I gave a talk on how to use Apache Spark and Apache Zeppelin to provide quick SQL-based visualizations for your personal finances. Here are the slides if you’re interested. I’ll be delving more into notebooks and developer productivity in a later post. I’m also working on exposing the Zeppelin notebooks that were part of the demo; if you’re interested in running them against your own personal-finance dataset, let me know!</p> <iframe src="https://docs.google.com/presentation/d/e/2PACX-1vTAAzPyecRy2w1WEahPVilb7KMWEYMMxK03l-Q5qHYQgiDHEhwJhTgfw6rTyPW87XV3aj4XQOJYuIW0/embed?start=false&amp;loop=false&amp;delayms=3000" frameborder="0" width="600" height="366" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe> Mon, 09 Oct 2017 00:00:00 +0000 http://www.josephpconley.com/2017/10/09/chariot-day-2017-personal-finances.html PHLAI Me to the Moon <p><img src="/assets/phlai.jpg" /></p> <p>Hello friends! I was lucky enough last week to attend <a href="https://phlai.comcast.com/">PHLAI</a>, a Comcast-sponsored conference on machine learning and artificial intelligence. The dreary weather did not dampen our spirits as practitioners and business stakeholders met to discuss one of the most important trends in our lifetime.</p> <p>The talks ranged from high-level, entertaining overviews to deep-dive technical lectures. 
The discussions were tightly focused on pragmatic approaches to solving business problems using machine learning and AI, and it’s amazing to see how much progress is being made in a seemingly short amount of time.</p> <p>Here are a few takeaways.</p> <h2 id="the-importance-of-comprehension-of-models">The importance of comprehension of models</h2> <p>This topic sprang up everywhere. The ability to understand why a model predicts something has a great bearing on regulatory concerns, racial profiling, and security. We can’t make meaningful progress in AI without taking steps to make these models as explainable as possible. And it doesn’t even have to be something as explicit as opening the black box and producing a deterministic formula; we just need some insight into why models predict the way they do.</p> <h2 id="pragmatic-approach">Pragmatic approach</h2> <p>I enjoyed the constant focus on simplicity and picking the right tool for the job. Why don’t you put down those neural nets and try a simple regression? Or maybe use specific models for specific tasks and (gasp) use imperative or brute-force techniques for other tasks. I must have heard the old <a href="https://www.farnamstreetblog.com/2015/01/how-to-think-2/">hammer and nail adage</a> in at least three separate talks, which is great. I think most experienced software engineers have sat their junior teammates down and given them the same advice. It’s important to be mindful of our own biases and to think about what delivers value to the client or business stakeholder by using the simplest tool for the job.</p> <h2 id="spread-the-love">Spread the love</h2> <p>The final trend I noticed was the focus on distributing ML/AI thinking among several teams rather than having it centralized in one silo. 
This idea was backed up by studies showing that companies that took a distributed approach posted better sales/ROI numbers than companies that siloed their innovation efforts on isolated teams.</p> <p>From an investment perspective, I also appreciated <a href="http://opim.wharton.upenn.edu/~kartikh/">Kartik Hosanagar</a>’s thoughts on a balanced AI portfolio. His studies showed that focusing mostly on quick, iterative wins with a few longer-term projects led to positive ROI. I love how practical this idea is. Speaking in terms of dollars and cents resonates much more strongly with business stakeholders and aligns these projects with the goals of the entire organization.</p> <h1 id="reflection">Reflection</h1> <p>I’ve been with Chariot Solutions for a few years now, and as such have had the opportunity to attend several conferences like this. Taking this time to think and reflect is essential in ALL fields, especially one as fast-moving and relevant as artificial intelligence. <a href="http://lifehacker.com/5670380/the-power-of-time-off">Bill Gates</a> famously takes an annual “think week” to explore and reflect on big ideas. Conferences are even better: they give you a chance to talk to other people in the field (talking is still one of the most effective forms of information gathering).</p> <p>But what’s the point of these conferences if we just go back to our day jobs and carry on with business as usual? We need to find a way to <strong>actively</strong> engage with these ideas. That engagement could be different for everyone. For some it could mean creating a small project using a new AI framework. Or reading a book about a specific trend or application. Or writing a blog post to organize your thoughts and make an argument. 
Whichever form it takes, I’d argue that what you do <strong>after</strong> the conference is just as important as what you do during it.</p> Tue, 22 Aug 2017 00:00:00 +0000 http://www.josephpconley.com/2017/08/22/phlai.html The O'Reilly AI Conference in NY <p><img src="/assets/ainy.jpg" /><br /><small>Photo credit: <a href="https://conferences.oreilly.com/artificial-intelligence/ai-ny">O'Reilly AI</a></small></p> <p>I recently had the pleasure of attending the nascent <a href="https://conferences.oreilly.com/artificial-intelligence/ai-ny">O’Reilly AI Conference</a> in Midtown Manhattan. The event focused on the technical progress being made in deep learning, reinforcement learning, and cognitive systems that augment human intelligence. These advancements have already had a significant impact in many areas, such as autonomous driving, health care, and knowledge work. My impression from the conference was that while there have been amazing gains in specific domains (i.e. narrow AI), there hasn’t been much focus yet on practical paths to developing fully-thinking, superintelligent systems (i.e. strong AI).</p> <h2 id="day-one---machines-as-thought-partners">Day One - Machines as Thought Partners</h2> <p>The talks I enjoyed the most on day one focused on building intelligent systems that work as “thought partners” with humans. David Ferrucci, the creator of IBM’s Watson and <a href="https://www.elementalcognition.com/">Elemental Cognition</a>, is creating intelligent systems which build a foundation of knowledge via dialogue with human counterparts. In this way, an intelligent system could learn much like a child does, asking questions and learning from experience. 
Whereas most predictive systems tend to rely on patterns in data, these systems would try to build actual knowledge that considers things like context, language, and even culture.</p> <iframe src="https://player.vimeo.com/video/190292710" width="640" height="360" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe> <p><br /></p> <p>Another talk that I really enjoyed was about advanced <a href="https://en.wikipedia.org/wiki/Natural_language_generation">Natural language generation (NLG)</a> by Kristian Hammond of <a href="https://narrativescience.com">Narrative Science</a>. He talked about intelligent systems as storytellers. Instead of presenting fancy visualizations for a data table in Excel, such a system could parse the table, do statistical analysis, and use NLG to tell you what’s interesting and important about the data. I love the efficiency in that! Developers spend so much time massaging, transforming, and visualizing data when really the endgame is to answer a few simple questions. Advances in NLG hold the promise of minimizing all this ceremony, freeing engineers up to solve more interesting problems and making us more productive.</p> <p>Ideas like these can really challenge one’s perspective on the future of work. Katy George of <a href="http://www.mckinsey.com/">McKinsey &amp; Co.</a> spoke about the impact of automation on jobs. She mentioned that very specific classes of jobs will probably be automated by AI soon, like collecting and organizing data (e.g. administrative/data entry) and predictable physical work (e.g. driving a truck). Interestingly, though, wages aren’t a strong indicator of what jobs can be automated. 
She mentioned landscaping as a low-wage job that would be tough to automate, while high-wage lawyers and paralegals risk being replaced by <a href="https://www.ft.com/content/5d96dd72-83eb-11e6-8897-2359a58ac7a5">systems that do automated research and mine large datasets</a>.</p> <p>I think everyone needs to reflect on the future of work. I’ve been holding on to the belief that my job as a software engineer was <em>very unlikely</em> to be replaced by a machine. <a href="https://www.bloomberg.com/graphics/2017-jobs-automation-risk/">A recent Bloomberg article</a> highlights a study from the University of Oxford predicting which jobs are at risk of automation, and I was surprised to find “Computer Programmer” roughly in the middle. While I’m still convinced that I won’t be replaced by AI in the short term, I can certainly envision a future where less explicit code is written, more reliance is placed on probabilistic models, and more of the repeatable grunt work of programming is handled by AI.</p> <p><a href="http://www.businessinsider.com/robots-overtaking-american-jobs-2014-1" target="_blank"><img src="http://static1.businessinsider.com/image/52e2b6336bb3f7da630fd543-636-/sdfvfscreen-shot-2014-01-22-at-11.12.29-am.gif" /></a><br /></p> <h2 id="day-two---reinforcement-learning-systems">Day Two - Reinforcement Learning Systems</h2> <p>Day Two had some interesting talks on reinforcement learning, especially in the keynotes. <a href="https://people.eecs.berkeley.edu/~anca/">Anca Dragan from UC Berkeley</a> talked about the development of autonomous driving systems, and it was neat to see the iterations they went through to get a usable system. Their initial effort resulted in an overly defensive autonomous driver. When driving on a crowded California highway, the system would wait too long for a safe cushion to change lanes, and when other cars at a 4-way intersection never came to a full stop, it would confuse the AI and prevent it from moving. 
So after some tinkering, the system <em>itself</em> organically developed a more pragmatic strategy, merging defensive driving with a more collaborative approach that worked much better in live traffic.</p> <p>Another neat example was Libratus (Latin for “balanced”), a heads-up no-limit Texas Hold ‘Em bot with a three-pronged strategy for playing poker. It starts by computing a Nash equilibrium over an abstraction of the game (the abstraction reduces the problem space). Then, during the later stages of the hand, it employs an endgame solver to help analyze all possible permutations of play. Finally, it analyzes <em>its own</em> historical play to find weaknesses and improve on them. Consequently, Libratus <a href="http://www.pokerlistings.com/libratus-poker-ai-smokes-humans-for-1-76m-is-this-the-end-42839">beat the world’s best poker players handily</a>, earning over $1 million in the process. Though this might seem like a narrow application of AI, systems like Libratus could provide insight into other applications involving imperfect information among multiple agents.</p> <iframe width="640" height="360" src="https://www.youtube.com/embed/jLXPGwJNLHk" frameborder="0" allowfullscreen=""></iframe> <p><br /></p> <p>Finally, the keynote given by Peter Norvig, one of the fathers of AI, stressed how AI could revolutionize software development. He spoke about a future where engineers are more like teachers than plumbers, instructing machines how to model certain processes at a higher level. In contrast, today’s software engineers are essentially micromanagers, writing every single instruction for the machine to handle. 
It’s refreshing to picture a world where coders could effectively build systems using higher-level thinking but still have confidence that their instructions will be interpreted and implemented without loss of meaning or control.</p> <iframe width="640" height="360" src="https://www.youtube.com/embed/mJHvE2JLN3Q" frameborder="0" allowfullscreen=""></iframe> <p><br /></p> <h2 id="reflection">Reflection</h2> <p>I was overwhelmed by the sheer impact that AI is already having in dozens of different fields. While the field of AI has gone through cycles of popularity and decline in the past, it’s hard to ignore the current wealth of possibilities given the advent of cheap, scalable computing power. My one hope for future conferences is more discussion of how to build AI in a balanced, secure manner, a topic that wasn’t adequately addressed at this one. The debate on the safety of AI <a href="https://www.recode.net/2017/7/25/16026184/mark-zuckerberg-artificial-intelligence-elon-musk-ai-argument-twitter">is currently raging</a>, with intelligent people on both sides of the issue. It’s important that well-meaning thinkers continue to debate this topic, because the capabilities of AI could very well grow exponentially beyond our control.</p> <p>I think what’s most encouraging to me (as a software engineer) is that since this field is still relatively new, we as engineers have the opportunity to help shape its direction. 
It’s a good reminder that technology isn’t inevitable; it gets built by people. So if you’re concerned about the direction of AI, get involved!</p> <h2 id="further-reading">Further Reading</h2> <ul> <li><a href="https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html">The AI Revolution: The Road to Superintelligence - Wait But Why</a> - Funny and accessible overview of AI and how it could evolve.</li> <li><a href="https://www.goodreads.com/book/show/20527133-superintelligence?from_search=true">Superintelligence: Paths, Dangers, Strategies by Nick Bostrom</a> - One of the more popular tomes on AI, this gives a thorough treatment of the history and context of AI. I’d go so far as to say it’s required reading if you’re interested in the field.</li> <li><a href="https://www.edx.org/course/artificial-intelligence-ai-columbiax-csmm-101x-0">AI Course on edX</a> - A good mix of theory and some hands-on work with Python.</li> </ul> Fri, 28 Jul 2017 00:00:00 +0000 http://www.josephpconley.com/2017/07/28/oreilly-ai-conference.html Real World Spark Lessons <p>I’ve enjoyed learning the ins and outs of <a href="https://spark.apache.org/">Spark</a> at my current client. I’ve got a nice base SBT project going where I use Scala to write the Spark job, <a href="https://github.com/typesafehub/config">Typesafe Config</a> to handle configuration, <a href="https://github.com/sbt/sbt-assembly">sbt-assembly</a> to build out my artifacts, and <a href="https://github.com/sbt/sbt-release">sbt-release</a> to cut releases. Using this as my foundation, I recently built a Spark job that runs every morning to collect the previous day’s data from a few different datasources, join some reference data, perform a few aggregations, and write all of the results to Cassandra. 
All in roughly three minutes (not too shabby).</p> <p>Here are some initial lessons learned:</p> <ul> <li>Be mindful of when to use <code class="highlighter-rouge">cache()</code>. It persists the results at that point in your DAG so the same instructions don’t get re-computed. I ended up using this before performing my multiple aggregations.</li> <li><a href="https://avro.apache.org/">Apache Avro</a> is really, really good at data serialization. It should be the default choice for large-scale data writing into HDFS.</li> <li>When using <code class="highlighter-rouge">pivot(column, range)</code>, it REALLY helps if you can enumerate the entire range of the pivot column values. My job time was cut in half as a result of passing all possible values. More on <a href="https://databricks.com/blog/2016/02/09/reshaping-data-with-pivot-in-apache-spark.html">the Databricks blog</a>.</li> <li>Cassandra does upserting by default, so I didn’t even need to worry about primary key constraints if data needed to be re-run (idempotency is badass).</li> </ul> <p>Recently, I was asked to update my job to run every 15 minutes to grab the latest 15 minutes of data (people always want more of a good thing). So I somewhat mindlessly updated my cronjob and didn’t re-tune any of the configuration parameters (spoiler alert: bad idea). Everything looked good locally and on our test cluster, but when it came time for production, WHAM! My job was now taking 5-7 minutes while running on a fraction of the data used for the daily runs. Panic time!</p> <p><img src="/assets/fry-panic.jpg" alt="Philip J. Fry Panicking" /><br /></p> <p>After wading through my own logs and some cryptic YARN stacktraces, it dawned on me to check my configuration properties. One thing in particular jumped out at me:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>spark.sql.shuffle.partitions = 2048 </code></pre></div></div> <p>I had been advised to set this value when running my job in production. 
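</p>

<p>For reference, here’s one way a setting like this can be wired up so it’s tunable per environment rather than hard-coded. This is an illustrative sketch, not my actual job: the app name and environment variable are made up, but <code class="highlighter-rouge">SparkSession.builder().config(...)</code> is the standard Spark 2.x API, and 200 is Spark’s out-of-the-box default for <code class="highlighter-rouge">spark.sql.shuffle.partitions</code>:</p>

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical setup: read the shuffle partition count from an
// environment variable, falling back to Spark's default of 200.
val shufflePartitions = sys.env.getOrElse("SHUFFLE_PARTITIONS", "200")

val spark = SparkSession.builder()
  .appName("daily-rollup") // illustrative name
  .config("spark.sql.shuffle.partitions", shufflePartitions)
  .getOrCreate()
```

<p>The same property can also be passed at submit time via <code class="highlighter-rouge">spark-submit --conf spark.sql.shuffle.partitions=64</code>, which keeps the tuning out of the code entirely.</p>

<p>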
And it worked well for my daily job (cutting processing time by 30s). However, now that I was working with data in a 15-minute time window, this was WAY too many partitions. The additional runtime resulted from the overhead of using so many partitions for so little data (my own theory, correct me if I’m wrong). So I disabled this property (falling back to the default of 200) and my job started running in ~2 minutes, much better!</p> <p><img src="/assets/futurama-happy.jpg" alt="Futurama gang happy" /><br /></p> <p>(UPDATE: after some experimentation on the cluster, I set the number of partitions to 64)</p> <p>More lessons learned:</p> <ul> <li>ALWAYS test your Spark job on a production-like cluster as soon as you make any changes. Running your job locally vs. running your job on a YARN/Mesos cluster is about as similar as running them on Earth vs. Mars, give or take.</li> <li>You REALLY should know the memory/CPU stats of your cluster to help inform your configuration choices. You should also be mindful of what other jobs run on the cluster and when.</li> <li>Develop at least a basic ability to <a href="https://databricks.com/blog/2015/06/22/understanding-your-spark-application-through-visualization.html">read and understand the Spark UI</a>.<br /> It’s got a lot of useful info, and with event logging you can see the improvements from your incremental changes in real time.</li> </ul> <p>Let me give another shout-out to Typesafe Config for making my life easier. I have three different ways (env variables, properties file, command-line args) to pass configuration to my Spark job, and I was able to quickly tune parameters using all of these options. Interfaces are just as important to developers as they are to end users!</p> <p>All in all this was a fun learning experience. I try to keep up on different blogs about Spark, but you really don’t get a good feel for it until you’re actually working on a problem with production-scale infrastructure and data. 
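</p>

<p>To make that Typesafe Config shout-out concrete, here’s a sketch of what the layering can look like. The file path and key name are illustrative (not from my actual project), but the API calls are real Typesafe Config: system properties passed on the command line override an environment-specific file, which in turn overrides the bundled <code class="highlighter-rouge">application.conf</code> defaults:</p>

```scala
import java.io.File
import com.typesafe.config.ConfigFactory

// Highest priority first: -Dspark.partitions=64 on the command line
// wins, then env/prod.conf, then application.conf on the classpath.
val config = ConfigFactory.systemProperties()
  .withFallback(ConfigFactory.parseFile(new File("env/prod.conf")))
  .withFallback(ConfigFactory.load())
  .resolve()

val partitions = config.getInt("spark.partitions") // hypothetical key
```

<p>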
I think this is a good lesson for any kind of knowledge work. You need to <a href="https://www.farnamstreetblog.com/2013/04/the-work-required-to-have-an-opinion/">do the work</a> to acquire knowledge. This involves not just reading but challenging assumptions, proving out ideas, and <a href="http://www.nytimes.com/1997/07/27/sports/hogan-constant-focus-on-perfection.html?src=pm">digging knowledge out of the dirt</a>. Active engagement using quick feedback loops will lead to much deeper and more usable knowledge, and that’ll make you, as Mick would say, <a href="https://www.youtube.com/watch?v=o0CXUv-xxtY">“a very dangerous person!”</a></p> <p>Party on!</p> <p><img src="https://media.giphy.com/media/vMnuZGHJfFSTe/giphy.gif" alt="Wayne Zang" /><br /></p> Wed, 31 May 2017 00:00:00 +0000 http://www.josephpconley.com/2017/05/31/real-world-spark-lessons.html Graph-Based Documentation <p>Has anyone ever met a documentation system they both <em>liked</em> and found <em>useful</em>? I love Evernote as much as the next guy, but the simple list view has its limitations. Most wikis present information in a tree view where pages are restricted to a parent-child relationship. Neither is very useful or intuitive for documenting complex systems!</p> <p><img src="https://confluence.atlassian.com/download/attachments/218270144/Confluence%20Tree%20View%20Web%20Part.PNG?version=1&amp;modificationDate=1192642298936&amp;api=v2" /><br /></p> <p>I’m a very visual thinker. I know from experience that when dealing with several layers of abstraction, having a good visualization can be very helpful. And when I say <strong>good</strong>, I mean <strong>good</strong> in the sense that the visualization is <strong>as close to reality as possible</strong>. Shane Parrish and others remind us that <a href="https://www.farnamstreetblog.com/2015/11/map-and-territory/">the map is not the territory</a>, but we can get pretty damn close. 
And I think graphs can help (<a href="https://neo4j.com/blog/technical-documentation-graph/">so does neo4j, shockingly</a>). It’s 2017, and we deserve better ways to visualize ideas and systems.</p> <p>Why graphs? Graphs are inherently simple. There are nodes and edges. That’s it. Nodes represent a “thing”; edges represent a “relationship between things”. There’s no parent-child restriction; any node can be related to any other node. Visually, the relationships can be shown compactly, and the information structure is more flexible. Using this as our foundation, we can start to build something useful.</p> <p>So, here’s what I’ve got so far. I’m calling it <em>Episteme</em> (from the Greek for <a href="https://en.wikipedia.org/wiki/Episteme">“knowledge, science, or understanding”</a>). It’s a desktop app, powered by <a href="https://electron.atom.io/">Electron</a>, that presents a simple graph of nodes and edges where each node is an entity we want to document. Here’s an example based on my current <a href="http://www.swingstats.com/about">SwingStats</a> architecture:</p> <p><img src="/assets/episteme-graph.png" alt="Episteme Graph" /><br /></p> <p>Here the nodes represent backend services, webapps, datasources, and APIs, while the edges connect the nodes that interact in some way. Clicking on a node brings up a Markdown-based document which autosaves on edit:</p> <p><img src="/assets/episteme-node.png" alt="Episteme Node" /><br /></p> <p>I’ve already been using it at my current client to help me navigate the dozens of systems and their interactions. I’ve found the most value in quickly accessing common commands (SQL, ssh, docker, etc.), environment information, and links. It’s definitely sped up development time, as I don’t need to constantly search Confluence or Google for rote-memory stuff like query syntax. It just feels like the information is much closer at hand.</p> <p>I’m hoping to add some functionality to make it more context-driven. 
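</p>

<p>For the curious, the persisted graph might look something like the following. The exact shape and the node names here are made up for illustration, but vis.js does expect nodes with <code class="highlighter-rouge">id</code>/<code class="highlighter-rouge">label</code> fields and edges with <code class="highlighter-rouge">from</code>/<code class="highlighter-rouge">to</code> fields, so the on-disk JSON can mirror that almost directly:</p>

```json
{
  "nodes": [
    { "id": 1, "label": "SwingStats webapp", "doc": "swingstats.md" },
    { "id": 2, "label": "Golf API", "doc": "golf-api.md" },
    { "id": 3, "label": "Datastore", "doc": "datastore.md" }
  ],
  "edges": [
    { "from": 1, "to": 2 },
    { "from": 2, "to": 3 }
  ]
}
```

<p>Keeping the persistence format this close to what the renderer consumes means there’s essentially no translation layer to maintain.</p>

<p>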
I think tagging nodes would serve well here: depending on what project or context I’m working on, I could filter the graph by tags to show only relevant nodes. As the graph grows, having a “Jump To” button for nodes would be nice. Full-text search is probably inevitable too.</p> <p>Another interesting extension would be having teams share and collaborate on the graph. Maybe in a Git-based system with a fork/clone model, so you get version control for free and can see how the graph evolves over time? Throw in some live documentation a la <a href="http://swagger.io/">Swagger</a> and baby you’ve got a stew going!</p> <iframe width="560" height="315" src="https://www.youtube.com/embed/Sr2PlqXw03Y" frameborder="0" allowfullscreen=""></iframe> <p><br /></p> <p>One cool thing to note is that it took me five hours to get a useful prototype working, and most of that time was spent learning Electron.<br /> I’ve spent a few more hours on refinements, but <a href="http://visjs.org/">vis.js</a> and <a href="https://simplemde.com/">SimpleMDE</a> do all the heavy lifting, and the graph is persisted as a simple JSON file for now. And I’m not a master front-end developer by any stretch of the imagination, so if you have an idea, find some good tools that get you most of the way there and kick the tires!</p> <p>Interested in this stuff? Wanna see a hosted version so you can take it for a spin? Wanna help me finish building the damn thing? Let me know in the comments below or on Twitter <a href="https://www.twitter.com/josephpconley">@josephpconley</a>. 
And thanks to those five brave individuals who voted in my <a href="https://twitter.com/josephpconley/status/852576703419478016">Twitter poll</a>; your feedback is much appreciated!</p> Wed, 26 Apr 2017 00:00:00 +0000 http://www.josephpconley.com/2017/04/26/graph-based-documentation.html Scala By The Schuylkill Recap <p>This past Tuesday I had the pleasure of attending the <a href="http://scala.comcast.com/">Scala by the Schuylkill conference</a> at Comcast headquarters in downtown Philadelphia. Initially an internal Scala conference, it was opened this year to external folks interested in Scala. I learned a lot from this event, gaining perspective on trends in the Scala community and sparking curiosity about several interesting applications of the Scala language.</p> <blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Our <a href="https://twitter.com/hashtag/ScalaByTheSchuylkill?src=hash">#ScalaByTheSchuylkill</a> organizers with keynote speaker <a href="https://twitter.com/sreekotay">@sreekotay</a>! <a href="https://twitter.com/hashtag/onbreak?src=hash">#onbreak</a> <a href="https://twitter.com/hashtag/scala?src=hash">#scala</a> <a href="https://t.co/yyJoTfkljm">pic.twitter.com/yyJoTfkljm</a></p>&mdash; Comcast Careers (@comcastcareers) <a href="https://twitter.com/comcastcareers/status/823903924394610694">January 24, 2017</a></blockquote> <script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script> <p><br /></p> <p>The keynote speeches were the highlight of the conference for me. Comcast’s CTO, <a href="https://twitter.com/sreekotay">Sree Kotay</a>, gave an engaging talk on the culture of innovation at Comcast and how they’ve evolved into a “technology first” company (as their CEO Brian Roberts recently put it). 
He also explained their rationale for using Scala on certain projects, citing its interoperability with Java, its modularity, and its ability to draw top talent as key factors in adoption. He even showed off his geek credentials by detailing his love/hate relationship with a certain Scala web service library. It’s clear that Sree is an engineer at heart, and it was refreshing to see that the CTO of a multi-billion dollar company still enjoys tinkering with code.</p> <p><a href="https://twitter.com/mpilquist">Michael Pilquist</a> gave the other keynote, doing a masterful job of explaining the <a href="https://speakerdeck.com/mpilquist/realistic-functional-programming">value of functional programming</a>. He boiled the essence of FP down to managing the complexity of both state and control flow via composability and small expressions in isolation. He also demystified category theory, an area of mathematics I’ve always found interesting but had never really seen the practical use for until now. He stressed that category theory in programming is used to achieve precision by finding the appropriate level of abstraction for a given problem to focus on the essential. Michael put these ideas in an accessible and interesting context, and I also appreciated his book recommendation, <a href="https://www.goodreads.com/book/show/23360039-how-to-bake-pi"><em>How to Bake Pi</em></a> by Eugenia Cheng, which I’m currently devouring.</p> <p>A great variety of talks followed, touching on interesting topics like GIS, machine learning, microservices, and streaming, with a focus on tools like Akka and Spark. About half of the speakers were from Comcast, and it was interesting to see the problems they’ve had to solve and why they chose Scala to solve them (hint: they work with data, a LOT of it). I came away with at least a dozen different TODOs to research new libraries or techniques. I also enjoyed meeting new people and catching up with some past colleagues. 
As an introvert, I don’t focus much on networking and relationship building, but a tech conference focused on a specific technology like Scala creates an environment that’s very conducive to meeting new people and learning about their work.</p> <p>I’m happy to see an important tech company like Comcast invest so much time and energy into both the Scala ecosystem and the local Scala community here in Philadelphia. It’s clear that, regardless of what you may have heard, Scala is here to stay!</p> <p>Special thanks to Chariot for sponsoring my attendance!</p> Fri, 27 Jan 2017 00:00:00 +0000 http://www.josephpconley.com/2017/01/27/scala-by-the-schuylkill.html Microservices and the Evolution of Software Architecture <p>As makers of enterprise software, we’ve come a long way. We’ve emerged from the shadows of command-line tools and Swing-based apps to build great and terrible web-based platforms, monolithic systems that inspire fear and awe in user and maintainer alike. Yet for all our cunning, we’re still imprisoned by our great works. Their slow builds, massive merge conflicts, and ever-increasing complexity slow the evolution of the software (and, perhaps more importantly, of the developer). But this unrest gave birth to the idea of <a href="http://martinfowler.com/articles/microservices.html">microservices</a>, an idea that can help development teams move faster and create more robust, scalable software.</p> <h2 id="what-are-microservices">What are Microservices?</h2> <p>A now-ubiquitous term, microservices takes that age-old programming tenet of <a href="https://en.wikipedia.org/wiki/Unix_philosophy#Do_One_Thing_and_Do_It_Well">“do one thing, and do it well”</a> and applies it in a larger context to the application architecture as a whole. 
We’re no longer talking about encapsulating business logic in <a href="https://en.wikipedia.org/wiki/Single_responsibility_principle">small methods or classes</a>, but about constraining the interactions of an entire context into an isolated, deployable, scalable unit. An application no longer lives solely in the confines of a WAR file on a Tomcat server, but exists as the composition of several services working in concert. The term “application” now seems quaint in contrast to the interactive platforms we can devise with microservices.</p> <p>This approach allows a team to build a service in isolation, choosing the appropriate language and datastore for that service’s needs. It also allows the teams themselves to be more specialized. Instead of having several full-stack developers who are pretty good at every aspect of the monolith, you can have entire teams dedicated to just UI, server-side, or database development. This allows for higher-quality software and a faster rate of evolution.</p> <script type="text/javascript" src="https://ssl.gstatic.com/trends_nrtr/884_RC03/embed_loader.js"></script> <script type="text/javascript"> trends.embed.renderExploreWidget("TIMESERIES", {"comparisonItem":[{"keyword":"microservices","geo":"","time":"2012-01-01 2016-12-01"}],"category":0,"property":""}, {"exploreQuery":"date=2012-01-01%202016-12-01&q=microservices"}); </script> <p><small>Popularity of the term “microservices” - <a href="https://www.google.com/trends/explore?date=2012-01-01%202016-12-01&amp;q=microservices" target="_blank">Google Trends</a></small></p> <p>Some recent trends have contributed to the popularity of microservices architectures. The rise of functional programming, with its focus on functions as first-class citizens and immutable state, encouraged developers to write smaller, simpler methods that compose rather than long, unwieldy algorithms. 
Reactive programming further refined inter-service communication with asynchronous message-passing, back-pressure, and resilience. Containerization provided another boost, literally encapsulating entire processes in portable, runnable environments. These ideas all lay the foundation for a microservices way of thinking.</p> <h2 id="the-end-goal-of-microservices">The End Goal of Microservices</h2> <p>So what great benefit, then, did we achieve from the microservices movement, other than some fancy new language to add to our resumes? David Dawson captured the idea best when he <a href="http://www.simplicityitself.io/microservices/2016/07/20/microservices-philosophy.html">described the end goal of microservices as achieving Antifragility</a>. This term was popularized by the stoic trader-turned-philosopher Nassim Nicholas Taleb in <a href="https://www.goodreads.com/book/show/13530973-antifragile?from_search=true"><em>Antifragile: Things That Gain from Disorder</em></a>. Taleb uses examples from all walks of life (financial options, political organization, physical training) to enumerate the tuple of &lt;Fragile, Robust, Antifragile&gt; that can be used to describe systems. A system under stress can either degrade (fragile), maintain normal operations (robust), or improve (antifragile). Antifragility, then, is the true aim for microservices, a system that can not just handle failure gracefully (robust) but also scale elastically to meet demand (antifragile). An ambitious goal, but one within reach given our current technology.</p> <p>Will this journey from the familiarity of the monolith to the unknown of the microservices be easy? Probably not. There are always tradeoffs when making such choices about technology. Developers will face increased complexity in running and testing multiple services. Architects will struggle to define appropriate boundaries for services. Operations folks will now have <em>n</em> builds to maintain and monitor instead of one. 
Change won’t come easy, and microservices may not be the best solution for every use case, but for most enterprise-level systems I think the microservices approach can lead to producing better software.</p> <h2 id="how-can-i-build-microservices">How Can I Build Microservices?</h2> <p>One such framework that is trying to make the implementation of microservices easier is <a href="http://www.lagomframework.com/">Lagom</a>. Lagom is a very opinionated framework. Given the pitfalls of microservices development, I think that’s a good thing. It’s designed from the bottom-up with microservices in mind, providing capabilities for service interaction, distributed persistence, and ease of development and deployment. You can read more about <a href="http://www.lagomframework.com/documentation/1.2.x/java/WhatIsLagom.html">their philosophy</a> and review a <a href="https://github.com/lagom/activator-lagom-java-chirper">sample application</a> to understand their motivation. In my next post, I’d like to use Lagom to answer one simple question: how do we build antifragile systems using microservices?</p> Thu, 05 Jan 2017 00:00:00 +0000 http://www.josephpconley.com/2017/01/05/antifragile-microservices.html http://www.josephpconley.com/2017/01/05/antifragile-microservices.html Help! My Monads are Nesting! <p>Do you build reactive applications using Scala? Then chances are you’ve had to deal with a <code class="highlighter-rouge">Future[Monad[T]]</code>, where <code class="highlighter-rouge">Monad</code> could be <code class="highlighter-rouge">Option</code>, <code class="highlighter-rouge">Either</code>, or something <a href="http://www.josephpconley.com/2016/07/18/an-ode-to-or.html">more wonderful</a> like <code class="highlighter-rouge">Or</code>. While these monads do nest as expected, the syntax and code flow can start to get pretty messy (motivating example below).</p> <p>Enter <a href="https://github.com/chariotsolutions/scala-commons#futureor">FutureOr</a>! 
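A minimal version of the idea is easy to sketch. The snippet below is an illustration only, using the standard library’s <code class="highlighter-rouge">Either</code> in place of Scalactic’s <code class="highlighter-rouge">Or</code>, so <code class="highlighter-rouge">FutureEither</code> and the stubbed service calls are hypothetical names rather than the actual library API:</p>

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

// Wrap Future[Either[E, A]] so for-comprehensions short-circuit on Left.
final case class FutureEither[E, A](future: Future[Either[E, A]]) {
  def map[B](f: A => B): FutureEither[E, B] =
    FutureEither(future.map(_.map(f)))

  def flatMap[B](f: A => FutureEither[E, B]): FutureEither[E, B] =
    FutureEither(future.flatMap {
      case Right(a) => f(a).future                // continue the chain
      case Left(e)  => Future.successful(Left(e)) // stop at the first error
    })
}

// Hypothetical service calls, each depending on the previous result
def callA: Future[Either[String, Int]]         = Future.successful(Right(1))
def callB(a: Int): Future[Either[String, Int]] = Future.successful(Right(a + 1))
def callC(b: Int): Future[Either[String, Int]] = Future.successful(Right(b * 2))

val result: Future[Either[String, Int]] = (for {
  a <- FutureEither(callA)
  b <- FutureEither(callB(a))
  c <- FutureEither(callC(b))
} yield c).future

// Await.result(result, 1.second) == Right(4)
```

<p>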
This utility makes it super-simple to sequence several <code class="highlighter-rouge">Future[Or[T]]</code> calls. It’s also fairly easy to implement, so you can clone it and customize it for your favorite nested monad combination.</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>//three service calls which return Future[Or[T]] and depend on the previous call
trait IntService {
  def callA: Future[Int Or One[ErrorMessage]]
  def callB(a: Int): Future[Int Or One[ErrorMessage]]
  def callC(b: Int): Future[Int Or One[ErrorMessage]]
}

val service: IntService = ...

//without FutureOr, really ugly! I wouldn't wish this on my worst enemy!
val ugly: Future[Int Or One[ErrorMessage]] = service.callA.flatMap {
  case Good(goodA) =&gt; service.callB(goodA).flatMap {
    case Good(goodB) =&gt; service.callC(goodB)
    case Bad(e) =&gt; Future.successful(Bad(e))
  }
  case Bad(e) =&gt; Future.successful(Bad(e))
}

//with FutureOr, so much better!
val result: Future[Int Or One[ErrorMessage]] = (for {
  a &lt;- FutureOr(service.callA)
  b &lt;- FutureOr(service.callB(a))
  c &lt;- FutureOr(service.callC(b))
} yield c).future</code></pre></div></div> Thu, 17 Nov 2016 00:00:00 +0000 http://www.josephpconley.com/2016/11/17/future-or.html http://www.josephpconley.com/2016/11/17/future-or.html The Data Science Conference Recap <p>I recently attended the first annual <a href="http://www.thedatascienceconference.com">The Data Science Conference</a> in downtown Chicago. You can read about my experience on the <a href="http://chariotsolutions.com/who-we-are/life-at-chariot/post/2015-data-science-conference-recap/">Life at Chariot</a> blog.
Thanks!</p> Mon, 23 Nov 2015 00:00:00 +0000 http://www.josephpconley.com/2015/11/23/the-data-science-conference.html http://www.josephpconley.com/2015/11/23/the-data-science-conference.html Chariot Day 2015 Recap <p>I recently had the pleasure of attending an internal tech conference at Chariot Solutions. You can read about my experience on the <a href="http://chariotsolutions.com/who-we-are/life-at-chariot/post/chariot-day-2015-recap-by-joe-conley/">Life at Chariot</a> blog. Thanks!</p> Wed, 27 May 2015 00:00:00 +0000 http://www.josephpconley.com/2015/05/27/chariot-day-recap.html http://www.josephpconley.com/2015/05/27/chariot-day-recap.html Philly ETE 2015 Recap <p>Last week I attended my first tech conference, the <a href="http://phillyemergingtech.com/">Emerging Technologies for the Enterprise Conference</a> in Philadelphia. I was able to sneak in at the last minute as a new member of <a href="http://chariotsolutions.com/">Chariot Solutions</a>, a company which thus far has proven to be an uncommon collection of intelligent individuals. Their conference set the bar high for future conferences, with a wealth of interesting talks covering a great swath of subjects. It was also nice to reconnect with former coworkers and learn what new and exciting technologies they were using.</p> <p>The keynote speakers on both days gave excellent talks focusing on our relationship to technology. <a href="http://www.tigoe.net/blog/">Tom Igoe</a> focused on the impact of physical computing in our lives and closed with a very poignant example of how a son used physical computing to allow his father to continue playing guitar despite his decreased motor skills. <a href="https://twitter.com/pragdave">Dave Thomas</a> gave an insightful talk about the importance of gaining tacit knowledge through experience and not being afraid to make mistakes, as that’s how the best knowledge is found.</p> <p>My favorite talk was What is Rust?
by <a href="https://twitter.com/wycats">Yehuda Katz</a>. Having been so immersed in the JVM world, I was pleasantly surprised at the simplicity of the Rust language in handling memory management and mutability. I’ll definitely be building my next pet project in Rust. I also thoroughly enjoyed <a href="https://twitter.com/brixen">Brian Shirai’s</a> talk The End Of General Purpose Languages: Rubinius 3.0 And The Next 10 Million Programs. Brian was very thoughtful and challenged my basic assumptions and beliefs about programming. I’m now frantically scouring YouTube for more of his talks. All of these talks brought to mind a recent <a href="http://freakonomics.com/2014/11/27/is-americas-education-problem-really-just-a-teacher-problem-a-new-freakonomics-radio-podcast/">Freakonomics podcast about teachers</a> which notes that the best teachers “appeal to both the head and the heart”. The same goes for good tech talks as well.</p> <p>I was fortunate to attend this conference when I did. I was starting to feel “inspiration inertia” for the field of programming. I mean how many blog posts can you read about Reactive/Big Data/Microservices before you start to wonder, is there anything else going on in this field? One audience member at a talk hinted at this malaise, complaining about the “marketingspeak” that can dominate certain organizations. Attending this conference proved to me that, on the contrary, the field of computer science is rife with new and exciting advances, probably more so than any other field (dentists or lawyers don’t deal with the rate of change that programmers do). It’s simply incumbent upon you as a technologist to constantly seek out new and exciting things (and not get bogged down by “<a href="http://en.wikipedia.org/wiki/Marchitecture">marchitecture</a>” as <a href="https://twitter.com/jamie_allen">Jamie Allen</a> put it).</p> <p>This leads me to my advice for future conference-goers: go to talks outside of your comfort zone.
While I certainly was impressed by the JVM-based talks I attended, I didn’t learn as much, mostly because I tend to watch similar talks online anyway. The most interesting and thought-provoking talks were the ones where I knew little or nothing about the subject matter. Also, don’t be afraid to socialize with the speakers. The presenters I spoke with were very approachable and eager to delve deeper into their subject matter or talk about anything under the sun.</p> <p>The folks at Chariot did a phenomenal job with ETE. I’m now eagerly investigating which tech conference to attend this year (another perk of working for Chariot: they’ll send you to a conference once a year). If you have the means, I’d highly recommend checking out conferences like ETE on a regular basis.</p> Mon, 13 Apr 2015 00:00:00 +0000 http://www.josephpconley.com/2015/04/13/philly-ete-recap.html http://www.josephpconley.com/2015/04/13/philly-ete-recap.html