> Note: this example is a slightly modified version of an example from the Ollama source code.
Set up a virtual environment (optional):

```shell
python3 -m venv .venv
source .venv/bin/activate
```
Install the Python dependencies:

```shell
pip install -r requirements.txt
```
Pull the model you'd like to use:

```shell
ollama pull phind-codellama
```
Set the source directory as an external folder:

```shell
export SOURCE_DIRECTORY='../project'
```

Or copy the files into this project:

```shell
mkdir source_documents
rsync -av --delete --exclude '.*' --exclude 'node_modules/' ../YOUR_PROJECT source_documents
```
```shell
python3 ingest.py
```
Output should look like this:

```
Creating new vectorstore
Loading documents from source_documents
Loading new documents: 100%|████████████████████| 86/86 [00:01<00:00, 49.19it/s]
Loaded 86 new documents from source_documents
Split into 400 chunks of text (max. 500 tokens each)
Creating embeddings. May take some minutes...
Ingestion complete! You can now run privateGPT.py to query your documents
```
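The "Split into 400 chunks of text (max. 500 tokens each)" step above can be sketched with a minimal, self-contained chunker. This is a simplification for illustration only: the actual `ingest.py` likely uses a proper text splitter with token-aware counting, and the whitespace-word tokenization here is an assumption.

```python
def split_into_chunks(text: str, max_tokens: int = 500) -> list[str]:
    """Naive splitter: treats whitespace-separated words as tokens
    and packs up to max_tokens of them into each chunk."""
    words = text.split()
    chunks = []
    for start in range(0, len(words), max_tokens):
        chunks.append(" ".join(words[start:start + max_tokens]))
    return chunks

# A 1200-"token" document yields 3 chunks of at most 500 tokens each.
document = " ".join(f"word{i}" for i in range(1200))
chunks = split_into_chunks(document)
print(len(chunks))  # 3
```

A real ingestion pipeline would also attach source metadata to each chunk before embedding, so answers can cite the originating file.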
```shell
python3 privateGPT.py
```
Enter a query: Refactor ExternalDocumentationLink to accept an icon property and display it after the anchor text, replacing the icon that is already there
> Answer:
You can refactor the `ExternalDocumentationLink` component by modifying its props and JSX. First, update the prop types to include a new `icon` prop which will accept a ReactNode. Then, place the `{icon}` after the anchor text inside the component's JSX. Here's an example of how you could refactor this component:
```jsx
import React from 'react';

interface ExternalDocumentationLinkProps {
  className?: string;
  href: string;
  label?: string;
  icon?: React.ReactNode; // add this line
}
...
```
- Currently `.ts` and `.tsx` files are loaded with `TextLoader`, as I couldn't find a loader specific to TypeScript files.
- phind-codellama needs 32 GB of RAM to run.
- This doesn't include an agent configuration.
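The fallback described in the first note above — routing `.ts`/`.tsx` files through a plain-text loader — can be sketched as an extension-to-loader table. This is a hypothetical illustration: `TextLoader` here is a stand-in class (mimicking LangChain's `TextLoader`), and the mapping structure is an assumption, not the actual code in `ingest.py`.

```python
import os

class TextLoader:
    """Stand-in for a plain-text document loader (e.g. LangChain's TextLoader)."""
    def __init__(self, path: str, encoding: str = "utf8"):
        self.path = path
        self.encoding = encoding

# Hypothetical mapping: extensions without a dedicated loader
# fall back to plain-text loading.
LOADER_MAPPING = {
    ".ts": (TextLoader, {"encoding": "utf8"}),
    ".tsx": (TextLoader, {"encoding": "utf8"}),
}

def loader_for(path: str):
    """Pick a loader class based on the file extension."""
    ext = os.path.splitext(path)[1]
    loader_cls, kwargs = LOADER_MAPPING.get(ext, (TextLoader, {}))
    return loader_cls(path, **kwargs)

print(type(loader_for("src/App.tsx")).__name__)  # TextLoader
```

If a TypeScript-aware loader (e.g. one that strips comments or splits on declarations) became available, only the mapping entries would need to change.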