July 2024 Articles

An interesting note on C3.ai: they are expensing their software development costs instead of capitalizing them. From my basic understanding, typical accounting practice is to capitalize software development costs, spreading them as amortized expenses over the software's useful life. C3.ai, by contrast, is expensing these costs immediately on its income statement as Selling, General & Administrative (SG&A) and Research & Development (R&D) expenses. This front-loading depresses current profitability but should improve future financial performance by eliminating ongoing amortization charges. Consequently, once development spending tapers off, C3.ai's profit and loss and cash flow should show significant improvement. The choice offers insight into the company's financial planning: they are willing to take an immediate hit to profitability for potential long-term gains.
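A quick sketch of the arithmetic, with invented numbers (these are illustrative, not C3.ai's actual figures): expensing puts the whole cost into year one, while capitalizing spreads the same total as amortization.

```python
# Hypothetical numbers, purely to illustrate the timing difference.
dev_cost = 300.0          # development spend incurred in year 1
useful_life = 3           # years of amortization if capitalized
revenue = [500.0] * 4     # flat revenue over years 1-4

# Expensing: the full cost hits year-1 operating expenses.
expensed_profit = [r - (dev_cost if yr == 0 else 0.0)
                   for yr, r in enumerate(revenue)]

# Capitalizing: the cost is spread as amortization over the useful life.
amort = dev_cost / useful_life
capitalized_profit = [r - (amort if yr < useful_life else 0.0)
                      for yr, r in enumerate(revenue)]

print(expensed_profit)     # [200.0, 500.0, 500.0, 500.0]
print(capitalized_profit)  # [400.0, 400.0, 400.0, 500.0]
```

Same total profit either way; only the timing shifts. Expensing makes today's numbers look worse and later years look better, which is the dynamic described above.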

After being asked to summarize a 50-page paper, a colleague, Gerben, took a stab at it by running the paper through ChatGPT. What he found was interesting:

ChatGPT doesn’t summarise. When you ask ChatGPT to summarise this text, it instead shortens the text. And there is a fundamental difference between the two. To summarise, you need to understand what the paper is saying. To shorten text, not so much. To truly summarise, you need to be able to detect that from 40 sentences, 35 are leading up to the 36th, 4 follow it with some additional remarks, but it is that 36th that is essential for the summary and that without that 36th, the content is lost.

He concludes that LLMs are influenced by how they were trained (their parameters) and by the context you've given them via the chat interface:

- If the subject is well-represented by the parameters (there has been lots of it in the training material), the parameters dominate the summary more and the actual text you want to summarise influences the summary less. Hence: LLM Chatbots are pretty bad in making specific summaries of a subject that is widespread; …
- If the context is relatively small, it has little influence and the result is dominated by the parameters, so not by the text you are trying to summarise;
- If the context is large enough and the subject is not well-represented by the parameters (there hasn't been much about it in the training material), the text you want to summarise dominates the result. But the mechanism you will see is 'text shortening', not true summarising.
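Gerben's 40-sentence example can be made concrete with a toy sketch (my construction, not his): a naive "shortener" that just keeps the first N sentences, run against a document whose essential point sits at sentence 36.

```python
# Toy document matching the 40-sentence example: 35 sentences lead up to
# the 36th, which carries the essential claim, followed by 4 remarks.
sentences = [f"build-up point {i}" for i in range(1, 36)]   # sentences 1-35
sentences.append("KEY CLAIM: the paper's essential conclusion")  # sentence 36
sentences += [f"additional remark {i}" for i in range(1, 5)]     # sentences 37-40

def shorten(sents, n):
    """Keep the first n sentences -- shortening, not summarising."""
    return sents[:n]

short = shorten(sentences, 10)
print("KEY CLAIM" in " ".join(short))  # → False: the essential sentence is gone
```

A true summary would have to detect that sentence 36 is the one that matters; mechanical shortening has no way to know that, which is the distinction the quote is drawing.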

A solar-powered "motorcycle" (three wheels) made by a young team in Michigan completed the Cannonball Run from New York's Red Ball Garage to the Portofino Hotel in Redondo Beach in 13 days, 15 hours, and 19 minutes.

Love the idea of a solar car. Builds on what Aptera Motors here in San Diego has been trying to do.