Joe Conley

PHLAI Me to the Moon <p><img src="/assets/phlai.jpg" /></p> <p>Hello friends! I was lucky enough last week to attend <a href="https://phlai.comcast.com/">PHLAI</a>, a Comcast-sponsored conference on machine learning and artificial intelligence. The dreary weather did not dampen our spirits as practitioners and business stakeholders met to discuss one of the most important trends of our lifetime.</p> <p>The talks ranged from high-level, entertaining overviews to deep-dive technical lectures. The discussions focused on pragmatic approaches to solving business problems with machine learning and AI, and it’s amazing to see how much progress has been made in such a short amount of time.</p> <p>Here are a few takeaways.</p> <h2 id="the-importance-of-comprehension-of-models">The importance of model comprehension</h2> <p>This topic sprang up everywhere. The ability to understand why a model predicts something has great bearing on regulatory concerns, racial profiling, and security. We can’t make meaningful progress in AI without taking steps to make these models as explainable as possible. It doesn’t even have to be something as explicit as opening the black box and producing a deterministic formula; we just need some insight into why models predict the way they do.</p> <h2 id="pragmatic-approach">Pragmatic approach</h2> <p>I enjoyed the constant focus on simplicity and picking the right tool for the job. Why don’t you put down those neural nets and try a simple regression? Or maybe use specific models for specific tasks and (gasp) imperative or brute-force techniques for others. I must have heard the old <a href="https://www.farnamstreetblog.com/2015/01/how-to-think-2/">hammer and nail adage</a> in at least three separate talks, which is great. 
I think most experienced software engineers have sat their junior teammates down and offered the same advice. It’s important to be mindful of your own biases and focus on what delivers value to your client or business stakeholder, using the simplest tool for the job.</p> <h2 id="spread-the-love">Spread the love</h2> <p>The final trend I noticed was the focus on distributing ML/AI thinking among several teams rather than centralizing it in one silo. This idea was backed by studies showing that companies that took a distributed approach posted better sales/ROI numbers than companies that siloed their innovation efforts in isolated teams.</p> <p>From an investment perspective, I also appreciated <a href="http://opim.wharton.upenn.edu/~kartikh/">Kartik Hosanagar</a>’s thoughts on a balanced AI portfolio. His studies showed that focusing mostly on quick, iterative wins alongside a few longer-term projects led to positive ROI. I love how practical this idea is. Speaking in terms of dollars and cents resonates much more strongly with business stakeholders and aligns these projects with the goals of the entire organization.</p> <h1 id="reflection">Reflection</h1> <p>I’ve been with Chariot Solutions for a few years now, and as such have had the opportunity to attend several conferences like this. Taking time to think and reflect is essential in ALL fields, especially one as fast-moving and relevant as artificial intelligence. <a href="http://lifehacker.com/5670380/the-power-of-time-off">Bill Gates</a> famously takes an annual “think week” to explore and reflect on big ideas. Conferences are even better: they give you a chance to talk to other people in the field (talking is still one of the most effective forms of information gathering).</p> <p>But what’s the point of these conferences if we just go back to our day jobs and carry on with business as usual? We need to find a way to <strong>actively</strong> engage with these ideas. 
That engagement will look different for everyone. For some it could mean creating a small project using a new AI framework. Or reading a book about a specific trend or application. Or writing a blog post to organize your thoughts and make an argument. Whatever form it takes, I’d argue that what you do <strong>after</strong> the conference is just as important as what you do during it.</p> Tue, 22 Aug 2017 00:00:00 +0000 http://www.josephpconley.com/2017/08/22/phlai.html Real World Spark Lessons <p>I’ve enjoyed learning the ins and outs of <a href="https://spark.apache.org/">Spark</a> at my current client. I’ve got a nice base SBT project going where I use Scala to write the Spark job, <a href="https://github.com/typesafehub/config">Typesafe Config</a> to handle configuration, <a href="https://github.com/sbt/sbt-assembly">sbt-assembly</a> to build out my artifacts, and <a href="https://github.com/sbt/sbt-release">sbt-release</a> to cut releases. Using this as my foundation, I recently built a Spark job that runs every morning to collect the previous day’s data from a few different datasources, join some reference data, perform a few aggregations, and write all of the results to Cassandra. All in roughly three minutes (not too shabby).</p> <p>Here are some initial lessons learned:</p> <ul> <li>Be mindful of when to use <code class="highlighter-rouge">cache()</code>. It persists intermediate results so your DAG doesn’t re-compute the same lineage for every downstream action. I ended up using it before performing my multiple aggregations.</li> <li><a href="https://avro.apache.org/">Apache Avro</a> is really good at data serialization and should be the default choice for writing large-scale data into HDFS.</li> <li>When using <code class="highlighter-rouge">pivot(column, range)</code>, it REALLY helps if you can enumerate the entire range of the pivot column’s values. My job time was cut in half as a result of passing all possible values. 
More here on <a href="https://databricks.com/blog/2016/02/09/reshaping-data-with-pivot-in-apache-spark.html">the Databricks blog</a>.</li> <li>Cassandra upserts by default, so I didn’t even need to worry about primary-key conflicts when data needs to be re-run (idempotency is badass).</li> </ul> <p>Recently, I was asked to update my job to run every 15 minutes to grab the latest 15 minutes of data (people always want more of a good thing). So I somewhat mindlessly updated my cronjob and didn’t re-tune any of the configuration parameters (spoiler alert: bad idea). Everything looked good locally and on our test cluster, but when it came time for production, WHAM! My job was now taking 5-7 minutes while running on a fraction of the data the daily runs processed. Panic time!</p> <p><img src="/assets/fry-panic.jpg" alt="Philip J. Fry Panicking" /><br /></p> <p>After wading through my own logs and some cryptic YARN stacktraces, it dawned on me to check my configuration properties. One thing in particular jumped out at me:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>spark.sql.shuffle.partitions = 2048 </code></pre></div></div> <p>I had been advised to set this value when running my job in production, and it worked well for my daily job (cutting processing time by 30s). However, now that I was working with data in a 15-minute window, this was WAY too many partitions. The additional runtime resulted from the overhead of using so many partitions for so little data (my own theory, correct me if I’m wrong). So I removed the property (falling back to the default of 200) and my job started running in ~2 minutes. Much better!</p> <p><img src="/assets/futurama-happy.jpg" alt="Futurama gang happy" /><br /></p> <p>(UPDATE: after some experimentation on the cluster, I set the number of partitions to 64.)</p> <p>More lessons learned:</p> <ul> <li>ALWAYS test your Spark job on a production-like cluster as soon as you make any changes. 
Running your job locally vs. on a YARN/Mesos cluster is about as similar as running it on Earth vs. Mars, give or take.</li> <li>You REALLY should know the memory/CPU stats of your cluster to help inform your configuration choices. You should also be mindful of what other jobs run on the cluster and when.</li> <li>Develop at least a basic ability to <a href="https://databricks.com/blog/2015/06/22/understanding-your-spark-application-through-visualization.html">read and understand the Spark UI</a>.<br /> It’s got a lot of useful info, and with event logging you can see the impact of your incremental changes in real time.</li> </ul> <p>Let me give another shout-out to Typesafe Config for making my life easier. I have three different ways (env variables, properties file, command-line args) to pass configuration to my Spark job, and I was able to quickly tune parameters using all of these options. Interfaces are just as important to developers as they are to end users!</p> <p>All in all this was a fun learning experience. I try to keep up with different blogs about Spark, but you really don’t get a good feel for it until you’re actually working on a problem with production-scale infrastructure and data. I think this is a good lesson for any knowledge work. You need to <a href="https://www.farnamstreetblog.com/2013/04/the-work-required-to-have-an-opinion/">do the work</a> to acquire knowledge. This involves not just reading but challenging assumptions, proving out ideas, and <a href="http://www.nytimes.com/1997/07/27/sports/hogan-constant-focus-on-perfection.html?src=pm">digging knowledge out of the dirt</a>. 
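As an aside, that three-layer configuration setup (defaults in a properties file, overridden by env variables, overridden again on the command line) can be sketched in Typesafe Config’s HOCON syntax. This is an illustrative fragment, not my job’s actual config; the key and variable names are made up:

```hocon
# application.conf -- illustrative sketch, not the real job config.
# Typesafe Config layers these when ConfigFactory.load() resolves:
#   1. this file provides the default,
#   2. an environment variable (if set) overrides it,
#   3. a -D system property on the command line overrides everything.
spark {
  shuffle-partitions = 200                      # baseline default
  shuffle-partitions = ${?SHUFFLE_PARTITIONS}   # optional env-var override
}
```

With layering like this, re-tuning a value such as the shuffle partition count becomes a one-line change instead of rebuilding the artifact.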
Active engagement through quick feedback loops leads to much deeper, more usable knowledge, and that’ll make you, as Mick would say, <a href="https://www.youtube.com/watch?v=o0CXUv-xxtY">“a very dangerous person!”</a></p> <p>Party on!</p> <p><img src="https://media.giphy.com/media/vMnuZGHJfFSTe/giphy.gif" alt="Wayne Zang" /><br /></p> Wed, 31 May 2017 00:00:00 +0000 http://www.josephpconley.com/2017/05/31/real-world-spark-lessons.html Graph-Based Documentation <p>Has anyone ever met a documentation system they both <em>liked</em> and found <em>useful</em>? I love Evernote as much as the next guy, but the simple list view has its limitations. Most wikis present information in a tree view where pages are restricted to a parent-child relationship. Neither is very useful or intuitive for documenting complex systems!</p> <p><img src="https://confluence.atlassian.com/download/attachments/218270144/Confluence%20Tree%20View%20Web%20Part.PNG?version=1&amp;modificationDate=1192642298936&amp;api=v2" /><br /></p> <p>I’m a very visual thinker. I know from experience that when dealing with several layers of abstraction, having a good visualization can be very helpful. And when I say <strong>good</strong>, I mean <strong>good</strong> in the sense that the visualization is <strong>as close to reality as possible</strong>. Shane Parrish and others remind us that <a href="https://www.farnamstreetblog.com/2015/11/map-and-territory/">the map is not the territory</a>, but we can get pretty damn close. And I think graphs can help (<a href="https://neo4j.com/blog/technical-documentation-graph/">so does neo4j, shockingly</a>). Because it’s 2017, and we deserve better ways to visualize ideas and systems.</p> <p>Why graphs? Graphs are inherently simple. There are nodes and edges. That’s it. Nodes represent a “thing”, edges represent a “relationship between things”. 
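To make that concrete, here is a minimal sketch of the node-and-edge model in JavaScript. The shape loosely mirrors what a graph library like vis.js consumes, but the field names and the `neighbors` helper are illustrative, not part of any real API:

```javascript
// A tiny graph: nodes are "things", edges are "relationships between things".
const nodes = [
  { id: "api", label: "SwingStats API" },
  { id: "web", label: "Webapp" },
  { id: "db", label: "Datasource" },
];
const edges = [
  { from: "web", to: "api" }, // the webapp calls the API
  { from: "api", to: "db" },  // the API reads the datasource
];

// Look up everything connected to a node, in either direction.
function neighbors(id) {
  return edges
    .filter((e) => e.from === id || e.to === id)
    .map((e) => (e.from === id ? e.to : e.from));
}
```

For example, `neighbors("api")` walks the edge list in both directions and returns `["web", "db"]`.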
There’s no parent-child restriction; any node can be related to any other node. Visually, the relationships can be shown compactly, and the information structure is more flexible. Using this as our foundation, we can start to build something useful.</p> <p>So, here’s what I’ve got so far. I’m calling it <em>Episteme</em> (from the Greek for <a href="https://en.wikipedia.org/wiki/Episteme">“knowledge, science, or understanding”</a>). It’s a desktop app powered by <a href="https://electron.atom.io/">Electron</a>: a simple graph of nodes and edges, where each node is an entity we want to document. Here’s an example based on my current <a href="http://www.swingstats.com/about">SwingStats</a> architecture:</p> <p><img src="/assets/episteme-graph.png" alt="Episteme Graph" /><br /></p> <p>Here the nodes represent backend services, webapps, datasources, and APIs, while the edges connect the nodes that interact in some way. Clicking on a node brings up a Markdown-based document which autosaves on edit:</p> <p><img src="/assets/episteme-node.png" alt="Episteme Node" /><br /></p> <p>I’ve already been using it at my current client to help me navigate the dozens of systems and their interactions. I’ve found the most value in quickly accessing common commands (SQL, ssh, docker, etc.), environment information, and links. It’s definitely sped up development, as I don’t need to constantly search Confluence or Google for rote-memory stuff like query syntax. The information just feels much closer at hand.</p> <p>I’m hoping to add some functionality to make it more context-driven. I think tagging nodes would serve well here: depending on what project or context I’m working in, I could filter the graph by tags to show only the relevant nodes. As the graph grows, having a “Jump To” button for nodes would be nice. Full-text search is probably inevitable too.</p> <p>Another interesting extension would be having teams share and collaborate on the graph. 
Maybe in a Git-based system with a fork/clone model, so you get version control for free and can see how the graph evolves over time? Throw in some live documentation a la <a href="http://swagger.io/">Swagger</a> and baby, you’ve got a stew going!</p> <iframe width="560" height="315" src="https://www.youtube.com/embed/Sr2PlqXw03Y" frameborder="0" allowfullscreen=""></iframe> <p><br /></p> <p>One cool thing to note: it took me only five hours to get a useful prototype working, and most of that time was spent learning Electron.<br /> I’ve spent a few more hours on refinements, but <a href="http://visjs.org/">vis.js</a> and <a href="https://simplemde.com/">SimpleMDE</a> do all the heavy lifting, and the graph is persisted as a simple JSON file for now. I’m not a master front-end developer by any stretch of the imagination, so if you have an idea, find some good tools that get you most of the way there and kick the tires!</p> <p>Interested in this stuff? Wanna see a hosted version so you can take it for a spin? Wanna help me finish building the damn thing? Let me know in the comments below or on Twitter <a href="https://www.twitter.com/josephpconley">@josephpconley</a>. And thanks to the five brave individuals who voted in my <a href="https://twitter.com/josephpconley/status/852576703419478016">Twitter poll</a>; your feedback is much appreciated!</p> Wed, 26 Apr 2017 00:00:00 +0000 http://www.josephpconley.com/2017/04/26/graph-based-documentation.html Scala By The Schuylkill Recap <p>This past Tuesday I had the pleasure of attending the <a href="http://scala.comcast.com/">Scala by the Schuylkill conference</a> at Comcast headquarters in downtown Philadelphia. The conference began as an internal Scala event, and this year the organizers opened it to external folks interested in Scala. 
I learned a lot from this event, gaining perspective on trends in the Scala community and sparking my curiosity about several interesting applications of the language.</p> <blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Our <a href="https://twitter.com/hashtag/ScalaByTheSchuylkill?src=hash">#ScalaByTheSchuylkill</a> organizers with keynote speaker <a href="https://twitter.com/sreekotay">@sreekotay</a>! <a href="https://twitter.com/hashtag/onbreak?src=hash">#onbreak</a> <a href="https://twitter.com/hashtag/scala?src=hash">#scala</a> <a href="https://t.co/yyJoTfkljm">pic.twitter.com/yyJoTfkljm</a></p>&mdash; Comcast Careers (@comcastcareers) <a href="https://twitter.com/comcastcareers/status/823903924394610694">January 24, 2017</a></blockquote> <script async="" src="//platform.twitter.com/widgets.js" charset="utf-8"></script> <p><br /></p> <p>The keynote speeches were the highlight of the conference for me. Comcast’s CTO, <a href="https://twitter.com/sreekotay">Sree Kotay</a>, gave an engaging talk on the culture of innovation at Comcast and how the company has evolved into a “technology first” company (as their CEO Brian Roberts recently put it). He also explained the rationale for using Scala on certain projects, citing interoperability with Java, modularity, and the ability to draw top talent as key factors in its adoption. He even showed off his geek credentials by detailing his love/hate relationship with a certain Scala web service library. It’s clear that Sree is an engineer at heart, and it was refreshing to see that the CTO of a multi-billion dollar company still enjoys tinkering with code.</p> <p><a href="https://twitter.com/mpilquist">Michael Pilquist</a> gave the other keynote, doing a masterful job of explaining the <a href="https://speakerdeck.com/mpilquist/realistic-functional-programming">value of functional programming</a>. 
He boiled the essence of FP down to managing the complexity of state and control flow through small, composable expressions that can be reasoned about in isolation. He also demystified category theory, an area of mathematics I’ve always found interesting but had never really seen the practical use for until now. He stressed that category theory in programming is used to achieve precision: finding the appropriate level of abstraction for a given problem so you can focus on the essential. Michael put these ideas in an accessible and interesting context, and I also appreciated his book recommendation, <a href="https://www.goodreads.com/book/show/23360039-how-to-bake-pi"><em>How to Bake Pi</em></a> by Eugenia Cheng, which I’m currently devouring.</p> <p>A great variety of talks followed, touching on interesting topics like GIS, machine learning, microservices, and streaming, with a focus on tools like Akka and Spark. About half of the speakers were from Comcast, and it was interesting to see the problems they’ve had to solve and why they chose Scala to solve them (hint: they work with data, a LOT of it). I came away with at least a dozen TODOs to research new libraries and techniques. I also enjoyed meeting new people and catching up with some past colleagues. As an introvert, I don’t focus much on networking and relationship building, but a conference centered on a specific technology like Scala creates an environment that’s very conducive to meeting new people and learning about their work.</p> <p>I’m happy to see an important tech company like Comcast invest so much time and energy into both the Scala ecosystem and the local Scala community here in Philadelphia. It’s clear that, regardless of what you may have heard, Scala is here to stay!</p> <p>Special thanks to Chariot for sponsoring my attendance!</p> Fri, 27 Jan 2017 00:00:00 +0000 http://www.josephpconley.com/2017/01/27/scala-by-the-schuylkill.html