bluelightning2k 2 days ago

I can't say I've ever wanted to transcribe code from an image. That seems super niche.

Perhaps the specific idea is to harvest coding textbooks as training data for LLMs?

  • cAtte_ a day ago

    Pieces is (correction: used to be, prior to the AI slopification) an app for storing code snippets. So I think you can imagine the general idea of, e.g., "cool API usage example from a YouTube video, let me screenshot it!"

  • eurekin 2 days ago

    I'm guessing to automatically scrape videos for future training rounds.

  • potato-peeler a day ago

    > can't say I've ever wanted to transcribe code from an image. That seems super niche.

    This is a nightmare for endpoint protection. Imagine rogue employees snapping pics of your proprietary codebase and then using this to reassemble it.

  • blharr 2 days ago

    Eh, imagine poor documentation where people take screenshots of steps and don't write them out.

    I can also imagine plenty of YouTube tutorials that type the code live... seems fairly useful

camtarn 2 days ago

Neat article, but I feel like I have no idea why they're doing this! Is transcribing code from images really such a big use case?

  • SloopJon 2 days ago

    The product appears to be similar to Microsoft's embattled Recall feature. In order to remember your digital life it takes frequent screenshots.

  • FloatArtifact 2 days ago

    From an accessibility standpoint, yes. To be able to pattern-match where you are in an IDE without using an accessibility API.

  • dewey 2 days ago

    > To best support software engineers when they want to transcribe code from images, we fine-tuned our pre-processing pipeline to screenshots of code in IDEs, terminals, and online resources like YouTube videos and blog posts.

    Even with these examples that seems like a very narrow use case.

  • EvanAnderson 2 days ago

    It worries me that stuff like that becoming easier will lead to wacky data pipelines being normalized (pulling display output off systems and "scraping" it to get data, of dubious quality, versus just building a proper interface). The kind of crowd that likes "low code" tools like MSFT's "Power Automate" is going to love to make Rube Goldberg nightmares out of tools like this.

    It fills me with a deep sadness that we created deterministic machines and then, through laziness, exploit every opportunity to "contaminate" them with sloppy practices that make them produce output with the same fuzzy inaccuracy as human brains.

    Old man yells at neural networks take: We're entering a "The Machine Stops" era where nobody is going to know how to formulate basic algorithms.

    "We need to add some numbers. Let's point a camera at the input, OCR it, then feed it to an LLM that 'knows math'. Then we don't have to figure out an algorithm to add numbers."

    I wish compute "cost" more so people would be forced to actually make efficient use of hardware. Sadly, I think it'll take mass societal and infrastructure collapse for that to happen. Until it does, though, let the excess compute flow freely!

    • jocoda 2 days ago

      Asimov - "The Feeling of Power".

  • gosub100 2 days ago

    I guess it would be excellent for evading security monitors to take unauthorized copies of your employer's codebase.

bobosha 2 days ago

Has anyone tried feeding the admittedly noisy OCR-ed text - at a document level - to an LLM to make sense of it? Presumably some of the less capable models should be quite affordable, and accurate at scale as well.
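
A minimal sketch of that idea. The prompt wording is an assumption, and the actual LLM call is left out since it depends entirely on which provider/client you use; only the document-level prompt construction is shown.

```python
# Sketch: prepare noisy document-level OCR text for LLM-based cleanup.
# The prompt text below is a made-up example, not a known-good recipe.
def build_correction_prompt(ocr_text: str) -> str:
    """Wrap raw OCR output in an instruction asking only for error correction."""
    return (
        "The following text was produced by OCR and may contain character-level "
        "errors. Reproduce it with the errors corrected, changing nothing else:\n\n"
        + ocr_text
    )

prompt = build_correction_prompt("Th1s 1s n0isy 0CR output.")
# `prompt` would then be sent through whatever LLM client you have on hand.
```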

lesuorac 2 days ago

OCR is the biggest XY problem.

Stop accepting PDFs and force things to use APIs ...

MoonGhost a day ago

Even small upscale model trained on texts should do better than big generic.

abc-1 2 days ago

Anything that mentions tesseract is about 10 years out of date at this point.

  • fxtentacle 2 days ago

    Quite simply, you’re completely wrong. Modern tesseract versions include a modern LSTM AI. It can very affordably be deployed on CPU, yet its performance is competitive with much more expensive large GPU-based models. Especially if you handle a high volume of scans, chances are that tesseract will have the best bang per buck.
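
    For reference, the LSTM engine mentioned above is selected on the Tesseract 4+/5 command line with `--oem 1`. A small sketch of assembling that invocation (the file names and page-segmentation mode are placeholder assumptions):

```python
# Sketch: building a Tesseract CLI invocation that uses the LSTM-only engine.
# --oem 1 selects the LSTM engine; --psm 6 assumes a single uniform block of
# text, a common choice for screenshots. File names here are placeholders.
def build_tesseract_cmd(image_path: str, out_base: str = "stdout",
                        oem: int = 1, psm: int = 6) -> list[str]:
    """Return an argument list suitable for subprocess.run()."""
    return ["tesseract", image_path, out_base,
            "--oem", str(oem), "--psm", str(psm)]

print(build_tesseract_cmd("screenshot.png"))
```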

    • ianhawes 2 days ago

      My company probably spent close to 6 figures overall creating Tesseract 5 custom models for various languages. Surya beats them all and is open source (and quite a bit faster).

      • booder1 a day ago

        Surya's model weights are licensed CC BY-NC-SA 4.0. They have an exception for small companies. If your company is not small, you either need to pay them or use the models illegally.

        Their training code and data are closed source. They are barely open-weight, and only the inference code is open source.

    • nicman23 2 days ago

      I remember that you couldn't train it yourself on a font like you could in older versions. Is that still the case?

  • booder1 2 days ago

    5.5.0 was released in November last year. Still a very active project as far as I can tell, and it runs on CPU. Even compared to the best open-source GPU option it is still pretty good. VLMs work very differently and don't work as well for everything. Why is it out of date?

    • cbsmith 2 days ago

      I don't know that that is true: https://researchify.io/blog/comparing-pytesseract-paddleocr-...

      Using Surya gets you significantly better results and makes almost all the work detailed in the article largely unnecessary.

      • booder1 a day ago

        Surya's model weights are licensed CC BY-NC-SA 4.0, so not free for commercial usage. Also, as far as I know, the training data is 100% unavailable. Given that they use well-trained but standard models, it isn't really open source and is barely, maybe, open-weight. I kinda hate how their repo says GPL, because that is only true for the inference code. The training code is closed source.

        • cbsmith 15 hours ago

          I did not know that the training code is closed source. That is troubling.

  • amelius 2 days ago

    Well, at least I can apt-get install tesseract.

    That doesn't hold for any of the GPU-based solutions, last time I checked.

  • krapht 2 days ago

    I just built a pipeline with tesseract last year. What's better that is open source and runnable locally?

    VLM hallucination is a blocker for my use case.

    • criddell 2 days ago

      If you are stuck with open source, then your options are limited.

      Otherwise I'd say just use your operating system's OCR API. Both Windows and macOS have excellent APIs for this.

    • stavros 2 days ago

      How is a hallucination worse than a Tesseract error?

      • krapht 2 days ago

        Because the VLM doesn't know it hallucinated. When you get a Tesseract error you can flag the OCR job for manual review.
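
        Tesseract exposes per-word confidence scores, which is what makes this review-flagging workflow possible. A sketch of the idea, operating on hard-coded stand-in data shaped like what `pytesseract.image_to_data(..., output_type=Output.DICT)` returns (the words, scores, and threshold here are invented for illustration):

```python
# Sketch: flagging low-confidence OCR words for manual review.
# `sample` is fabricated stand-in data, not a real Tesseract result.
CONF_THRESHOLD = 60  # Tesseract reports 0-100 per word; -1 marks non-text boxes

sample = {
    "text": ["def", "m4in():", "", "return", "42"],
    "conf": [96, 41, -1, 88, 93],
}

def words_needing_review(data: dict, threshold: int = CONF_THRESHOLD) -> list[str]:
    """Return words below the confidence threshold, skipping non-text boxes."""
    return [w for w, c in zip(data["text"], data["conf"]) if 0 <= c < threshold]

print(words_needing_review(sample))  # the low-confidence garbled word is flagged
```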

      • gessha 2 days ago

        Latter is more likely to get debugged.

      • amelius 2 days ago

        It could hallucinate obscene language, something which is less likely with classic OCR.

      • jgalt212 2 days ago

        Hallucinations are hard to detect unless you are a subject-matter expert. I don't have direct experience with Tesseract error detection.

sushid 2 days ago

Making OCR more accurate for regular text (e.g. data extraction from documents) would be useful; not sure how useful code transcription is

vaxman 2 days ago

Tesseract OCR was created by digital (DEC) in _1985_ (yes, 40, not four, YEARS ago). Now go back and read the article and ROFL with me.

  • ivanjermakov 2 days ago

    What is this argument? Much software we use today was created in the 80s.

    • vaxman a day ago

      Not the actual implementations heh ...I heard even Linus has dropped support for the 486. Even the infra is finally giving way...did you see the NVLINK SPINE announcement a few days ago? It's going to be deployed in Stargate UAE that was announced Thursday.

  • rafram 2 days ago

    Unix was created in _1971_ and here we are still running processes and shells like it’s the 70s. Why not just have an LLM dream up the output?

    • vaxman a day ago

      No son, Linux is not a version of Unix any more than MINIX is.

      NeXTStep was real UNIX, but macOS is not.

      BTW, I was taught to program in C by one of the original core Unix team members, and I worked for DEC long before I could have discussed Tesseract OCR with people who didn't. Keep those ignorant downvotes comin'

  • Onavo 2 days ago

    The original Tesseract OCR had no neural nets. It bears little resemblance to the modern version.

    • vaxman 2 days ago

      It's still 40.

      Why not use Ollama-OCR?

      • rafram 2 days ago

        I’ve tested a bunch of vision models on particularly difficult documents (handwritten in a German script that’s no longer used), and I have yet to be impressed. They’re good at BSing to the point that you almost think they nailed it, until you realize that it’s mostly/all made-up text that doesn’t appear in the document.

      • yjftsjthsd-h 2 days ago

        > It's still 40.

        Is it, though? If the important parts of the code are new, does it matter that other parts are older or derived from older code? (Of course, I think this whole line of thought is pointless; what matters is not age, but how well it works, and tesseract generally does seem to work.)

      • krapht 2 days ago

        Because I benchmarked both on my dataset and found that Tesseract was better for my use-case?