[Repost] You can now run a GPT-3-level AI model on your laptop, phone, and Raspberry Pi


Editor's Summary

## Summary of the article:

The article describes the recent emergence of Meta's new large language model (LLM), **LLaMA**.

**Key points:**

* LLaMA is available in several sizes (7B to 65B parameters) and can now run on consumer hardware, including M1 Macs, a Raspberry Pi 4, and a Pixel 6 phone.
* Meta claims its smaller models match OpenAI's GPT-3 in output quality, and running it locally avoids OpenAI's content restrictions and API fees.
* The model's weights leaked publicly via BitTorrent on March 2, sparking an explosion of community development such as llama.cpp and Stanford's Alpaca.
* Quantization shrinks the model enough to run on ordinary consumer machines, opening up new possibilities for local AI experimentation.
* While the quantized 7B model does not yet match ChatGPT, fine-tuning and optimization are expected to improve LLaMA's output further.

**Overall, the article highlights the rapid pace of AI development and the potential impact of LLaMA on the future of AI technology.**

Full Text

 

https://arstechnica.com/information-technology/2023/03/you-can-now-run-a-gpt-3-level-ai-model-on-your-laptop-phone-and-raspberry-pi/

 

Things are moving at lightning speed in AI Land. On Friday, a software developer named Georgi Gerganov created a tool called "llama.cpp" that can run Meta's new GPT-3-class AI large language model, LLaMA, locally on a Mac laptop. Soon thereafter, people worked out how to run LLaMA on Windows as well. Then someone showed it running on a Pixel 6 phone, and next came a Raspberry Pi (albeit running very slowly).

If this keeps up, we may be looking at a pocket-sized ChatGPT competitor before we know it.

 

But let's back up a minute, because we're not quite there yet. (At least not today—as in literally today, March 13, 2023.) But what will arrive next week, no one knows.

Since ChatGPT launched, some people have been frustrated by the AI model's built-in limits that prevent it from discussing topics that OpenAI has deemed sensitive. Thus began the dream—in some quarters—of an open source large language model (LLM) that anyone could run locally without censorship and without paying API fees to OpenAI.

Open source solutions do exist (such as GPT-J), but they require a lot of GPU RAM and storage space. Other open source alternatives could not boast GPT-3-level performance on readily available consumer-level hardware.

Enter LLaMA, an LLM available in parameter sizes ranging from 7B to 65B (that's "B" as in "billion parameters," which are floating point numbers stored in matrices that represent what the model "knows"). LLaMA made a heady claim: that its smaller-sized models could match OpenAI's GPT-3, the foundational model that powers ChatGPT, in the quality and speed of its output. There was just one problem—Meta released the LLaMA code open source, but it held back the "weights" (the trained "knowledge" stored in a neural network) for qualified researchers only.
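To put those parameter counts in perspective, here is a rough back-of-the-envelope estimate (our own illustration, not from the article; real checkpoint files differ slightly because of format overhead and quantization metadata):

```python
# Back-of-the-envelope weight storage estimates for LLaMA's model sizes.
# Illustrative only: actual file sizes vary with the on-disk format.

GIB = 1024 ** 3

def weights_gib(params_billions: float, bits_per_param: float) -> float:
    """Approximate storage for the weights alone, in GiB."""
    return params_billions * 1e9 * bits_per_param / 8 / GIB

for size in (7, 13, 33, 65):
    fp16 = weights_gib(size, 16)  # the released 16-bit weights
    q4 = weights_gib(size, 4)     # after 4-bit quantization
    print(f"LLaMA {size}B: ~{fp16:.0f} GiB at fp16, ~{q4:.1f} GiB at 4-bit")
```

At 16 bits per parameter, the 7B model needs roughly 13 GiB for weights alone; at 4 bits it fits in about 3.3 GiB, which is what puts it within reach of a laptop.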


## Flying at the speed of LLaMA

Meta's restrictions on LLaMA didn't last long, because on March 2, someone leaked the LLaMA weights on BitTorrent. Since then, there has been an explosion of development surrounding LLaMA. Independent AI researcher Simon Willison has compared this situation to the release of Stable Diffusion, an open source image synthesis model that launched last August. Here's what he wrote in a post on his blog:

It feels to me like that Stable Diffusion moment back in August kick-started the entire new wave of interest in generative AI—which was then pushed into overdrive by the release of ChatGPT at the end of November.

That Stable Diffusion moment is happening again right now, for large language models—the technology behind ChatGPT itself. This morning I ran a GPT-3 class language model on my own personal laptop for the first time!

AI stuff was weird already. It’s about to get a whole lot weirder.

Typically, running GPT-3 requires several datacenter-class A100 GPUs (also, the weights for GPT-3 are not public), but LLaMA made waves because it could run on a single beefy consumer GPU. And now, with optimizations that reduce the model size using a technique called quantization, LLaMA can run on an M1 Mac or a lesser Nvidia consumer GPU (although "llama.cpp" only runs on CPU at the moment—which is impressive and surprising in its own way).
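The core idea behind quantization is easy to sketch. What follows is our own simplified Python illustration, not llama.cpp's actual code (its "q4_0" format similarly stores blocks of 32 weights with a shared scale factor, but in a packed binary layout):

```python
import numpy as np

# Simplified symmetric 4-bit quantization: each block of 32 weights is
# rounded to one of 16 integer levels sharing a single scale factor,
# cutting storage roughly 4x versus 16-bit floats.

def quantize_q4(weights: np.ndarray, block: int = 32):
    w = weights.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero in all-zero blocks
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_q4(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=4096).astype(np.float32)  # stand-in weights
q, s = quantize_q4(w)
restored = dequantize_q4(q, s)
print("mean absolute error:", float(np.abs(w - restored).mean()))
```

The rounding introduces a small error in every weight, which is why quantized models trade some output quality for the dramatic reduction in memory.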

Things are moving so quickly that it's sometimes difficult to keep up with the latest developments. (Regarding AI's rate of progress, a fellow AI reporter told Ars, "It's like those videos of dogs where you upend a crate of tennis balls on them. [They] don't know where to chase first and get lost in the confusion.")

For example, here's a list of notable LLaMA-related events based on a timeline Willison laid out in a Hacker News comment:

  • February 24, 2023: Meta AI announces LLaMA.
  • March 2, 2023: Someone leaks the LLaMA models via BitTorrent.
  • March 10, 2023: Georgi Gerganov creates llama.cpp, which can run on an M1 Mac.
  • March 11, 2023: Artem Andreenko runs LLaMA 7B (slowly) on a Raspberry Pi 4, 4GB RAM, 10 sec/token.
  • March 12, 2023: LLaMA 7B running via npx, a Node.js package execution tool.
  • March 13, 2023: Someone gets llama.cpp running on a Pixel 6 phone, also very slowly.
  • March 13, 2023: Stanford releases Alpaca 7B, an instruction-tuned version of LLaMA 7B that "behaves similarly to OpenAI's text-davinci-003" but runs on much less powerful hardware.

After obtaining the LLaMA weights ourselves, we followed Willison's instructions and got the 7B parameter version running on an M1 MacBook Air, and it runs at a reasonable speed. You call it as a script on the command line with a prompt, and LLaMA does its best to complete it in a reasonable way.
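For illustration, an invocation looks something like the following, wrapped in Python here for consistency (a sketch based on early llama.cpp builds; the binary name, flags, and model path are assumptions that will vary with your setup):

```python
import subprocess

# Hypothetical invocation of a built llama.cpp binary. The flags below
# (-m model file, -p prompt, -n tokens to generate, -t CPU threads)
# match early llama.cpp builds and may change; paths are placeholders.
result = subprocess.run(
    [
        "./main",
        "-m", "./models/7B/ggml-model-q4_0.bin",  # 4-bit quantized weights
        "-p", "The first man on the moon was ",   # prompt to complete
        "-n", "128",                              # max new tokens
        "-t", "8",                                # CPU threads
    ],
    capture_output=True,
    text=True,
)
print(result.stdout)
```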

A screenshot of LLaMA 7B in action on a MacBook Air running llama.cpp. (Credit: Benj Edwards / Ars Technica)

There's still the question of how much the quantization affects the quality of the output. In our tests, LLaMA 7B trimmed down to 4-bit quantization was very impressive for running on a MacBook Air—but still not on par with what you might expect from ChatGPT. It's entirely possible that better prompting techniques might generate better results.

Also, optimizations and fine-tuned variants come quickly when everyone has their hands on the code and the weights—even though LLaMA is still saddled with some fairly restrictive terms of use. Stanford's release of Alpaca today proves that fine-tuning (additional training with a specific goal in mind) can improve performance, and it's still early days after LLaMA's release.

As of this writing, running LLaMA on a Mac remains a fairly technical exercise. You have to install Python and Xcode and be familiar with working on the command line. Willison has good step-by-step instructions for anyone who would like to attempt it. But that may soon change as developers continue to code away.

As for the implications of having this tech out in the wild—no one knows yet. While some worry about AI's impact as a tool for spam and misinformation, Willison says, "It’s not going to be un-invented, so I think our priority should be figuring out the most constructive possible ways to use it."

Right now, our only guarantee is that things will change rapidly.
