
ollamac's Introduction

Hi, there! 👋🏼

I'm Kevin Hermawan, and below are some of the projects I've passionately crafted:

iOS

macOS

  • Canvas - DALL·E playground for the Mac
  • Ollamac - Mac app for Ollama

Web

Open Source Software (OSS)

For my open-source contributions, you can check them out here.

ollamac's People

Contributors

dcchambers, djmango, kevinhermawan


ollamac's Issues

2.0 crash

The program crashes at startup since the update

Unwieldy input

Please consider changing the input field. It should be larger, with a scrollbar, and it should submit via a dedicated button rather than on any keypress.

I don't know if it's just me, but I can't use this for coding. As soon as I press Enter or Shift+Enter it submits. I also can't see what I've written as the input grows, and if I copy and paste multiline input, it gets flattened into a single paragraph.
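
For illustration, here's a rough SwiftUI sketch of the kind of input area I mean (the view and bindings are hypothetical, not Ollamac's actual code): a multi-line, scrollable editor where Return inserts a newline and only the button sends.

import SwiftUI

// Hypothetical input bar: multi-line, scrollable, and submits only via the button.
struct PromptInputBar: View {
    @Binding var prompt: String
    var onSend: (String) -> Void

    var body: some View {
        HStack(alignment: .bottom, spacing: 8) {
            // TextEditor is inherently multi-line and scrollable, and the
            // Return key inserts a newline instead of submitting.
            TextEditor(text: $prompt)
                .font(.body)
                .frame(minHeight: 44, maxHeight: 160)
                .overlay(RoundedRectangle(cornerRadius: 6).stroke(Color.secondary.opacity(0.4)))

            Button("Send") {
                onSend(prompt)
                prompt = ""
            }
            .disabled(prompt.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty)
        }
        .padding(8)
    }
}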

Question: Allowing Access to Files on Mac

Dragging a file onto the prompt inserts the file path, but a prompt like "summarize: /Users/mymcbook/Desktop/example.pdf" returns an answer saying it doesn't have access to the file. I have given the app Full Disk Access. Is this not currently possible?

Remove SUEnableDownloaderService key in Info.plist

Due to a sandboxing issue new to macOS Sonoma, users may see a one-time warning when checking for updates, saying that "Downloader" differs from previously opened versions.

Ollamac can simply resolve this issue by removing the SUEnableDownloaderService key in the app's Info.plist. This app is already entitled with com.apple.security.network.client and does not have a reason to use Sparkle's downloader service.

See this discussion for more information.

First message fails when starting new chat

When you start a new chat (with any model), the first response comes back as either \n or just the first token of the response. If you then re-prompt, the response comes back correctly.
(Screenshot: 2023-12-09, 11:54 AM)

Regeneration seems to use incorrect context


Love the feature, but I stumbled upon strange behavior: the regenerated response seemed to be a bit out of the chat's context.

At first I thought it loses context completely, but after a bit of experimentation I think it might be using previous generations of the response as context.

DecodingError when Ollama is not on the same host

Hi together 👋🏻

First of all thanks for making this nice Ollama client. I love the clean interface!

Since at the moment it is not possible to set another Ollama server than localhost (and my Ollama server is on another host), I set up a SSH tunnel to open the port 11434 on my localhost and tunnel to the same on the Ollama server.

It kind of works, but only for a short while. The selection of the model is always fine, but the chat will stop with a...

Response could not be decoded because of error:
The data couldn't be read because it isn't in the correct format.

... after some chunks of the answer were presented.

So I downloaded the XCode project and set some breakpoints.
I was able to get the error and this is the problem:

 Printing description of result.failure.responseSerializationFailed.reason.decodingFailed.error:
▿ DecodingError
  ▿ dataCorrupted : Context
    - codingPath : 0 elements
    - debugDescription : "The given data was not valid JSON."
    ▿ underlyingError : Optional<Error>
	- some : Error Domain=NSCocoaErrorDomain Code=3840 "Unexpected character '{' after top-level value around line 2, column 1." UserInfo={NSJSONSerializationErrorIndex=103, NSDebugDescription=Unexpected character '{' after top-level value around line 2, column 1.}

A bit strange, since an SSH tunnel should not change the JSON of an HTTP response.
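
For what it's worth, the error looks like what happens when two streamed chunks land in the same buffer: Ollama streams one JSON object per line (NDJSON), and decoding the whole buffer as a single object then fails on the second '{'. A minimal sketch of line-by-line decoding, with illustrative types rather than OllamaKit's actual ones:

import Foundation

// Illustrative shape of one streamed chunk (not OllamaKit's actual model).
struct StreamChunk: Decodable {
    let response: String?
    let done: Bool
}

// Decode a buffer that may contain several newline-delimited JSON objects.
func decodeChunks(from buffer: Data) throws -> [StreamChunk] {
    let decoder = JSONDecoder()
    return try String(decoding: buffer, as: UTF8.self)
        .split(separator: "\n")   // one JSON object per line
        .map { try decoder.decode(StreamChunk.self, from: Data($0.utf8)) }
}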

Anyhow, since you have refrained from making the Ollama server configurable so far (and this should be a super easy thing to do), could it be that you have faced similar problems as well?

I read here...
https://stephencowchau.medium.com/ollama-context-at-generate-api-output-what-are-those-numbers-b8cbff140d95
... that Ollama opens additional ports besides 11434.
I wonder if this could play a role.
Then again, I think it shouldn't, since those ports should not be relevant for the communication between the client and the server.

Any thoughts?

Thanks a bunch in advance and keep up the great work!

Best regards
Stephan

Previously deleted models still show in model select

Previously deleted models still show in the model select. When you select one, it says: "This model is currently unavailable or has been removed." It seems strange not to use the model list endpoint on load to fetch the currently available models; there is no reason to cache them.
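
For illustration, a rough sketch of populating the picker from Ollama's GET /api/tags endpoint on load instead of a cached list (the types and the localhost URL are assumptions):

import Foundation

// Minimal shapes for Ollama's GET /api/tags response (fields trimmed to what's needed).
struct TagsResponse: Decodable {
    struct Model: Decodable { let name: String }
    let models: [Model]
}

// Fetch the models that are actually installed right now.
func fetchAvailableModels(host: URL = URL(string: "http://localhost:11434")!) async throws -> [String] {
    let (data, _) = try await URLSession.shared.data(from: host.appendingPathComponent("api/tags"))
    return try JSONDecoder().decode(TagsResponse.self, from: data).models.map(\.name)
}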

No new chats available

I have Ollama running on my Mac, but Ollamac does not show any chat and "New chat" is greyed out. How do I open a new chat?

(Screenshot: 2023-11-09, 13:04)

External server

I like the interface and colours, but I can't run Ollama efficiently on my Mac.
I do have it running on a Linux box with a 3090 on my LAN, though!
Can you please add a setting to choose between localhost and a LAN IP for the server connection, like OpenWebUI does?

Siri / Dictation support

I'd love the ability to accept input via dictation (speech-to-text) and send responses to Siri or Shortcuts. I'm not sure how to do it myself or I'd just fork, but I will look into learning the skills.

User configurable baseURL

I need to connect to an Ollama API running on a remote server, but it seems that baseURL is hard-coded in the current version:

var baseURL: URL {
    URL(string: "http://localhost:11434")!
}

Would it be possible to make this user configurable?
HTTPS (TLS) should also be supported for remote APIs.
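
For illustration, a minimal sketch of what a user-configurable base URL could look like, backed by UserDefaults; the "ollamaHost" key and the fallback are assumptions, not a proposal for the actual implementation:

import Foundation

// Hypothetical settings-backed base URL with a localhost fallback.
var baseURL: URL {
    if let string = UserDefaults.standard.string(forKey: "ollamaHost"),
       let url = URL(string: string) {
        return url   // e.g. "https://ollama.example.com:11434" also works over TLS
    }
    return URL(string: "http://localhost:11434")!
}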

Simpler new-chat experience (no pop-up prompt)

The pop-up prompt asking for a chat name and model is sort of unnecessary.

It'd be simpler to get right to the chat UI, where the chat name can be optionally edited, and the model is pre-selected.

In my case, there's only one model so that's simple. Otherwise you could default to the last-used model and allow changing before the first message with a drop-down. Maybe you could define a default?

If the user doesn't rename the chat after the first message or so, use the model together with the chat contents to come up with a name, like the chat sites do.

Great app, thanks for considering!

Text size

Text is unreasonably small, and you can't make it bigger with Cmd+"+". It would be really nice to have an alternative to changing the screen resolution, as not even increasing the text size in System Settings affects the app.

Thank you! And sad to see a commercial copy: https://ollamac.com

Hey @kevinhermawan ! Thank you for this native Mac app! I just wanted to say thanks! I've built native MacOS apps before and I appreciate the effort that goes into it.

I just wanted to let you know that I think someone else made some tweaks and launched a commercial version. https://ollamac.com

They're even using your OllamaKit, which is fine, but keeping the name is a bit disingenuous on their part, IMHO.

Here's that app's GitHub issues page: https://github.com/gregorym/ollamac-pro

It's a bit disheartening. I know the LICENSE currently is MIT, so they're definitely free to do so. But this feels a bit off. They even kept your name.

I just wanted to bring this to your attention and say thanks.

[Feature Request] Syntax Highlighting

Just tried out Ollamac for the first time. Wow, is it smooth! I'll never use a browser UI again! Having said that, it would be great to see some syntax highlighting for code blocks, if that's possible.
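
One possible route, if it helps, is a third-party package such as Highlightr (a Swift wrapper around highlight.js); the sketch below follows its documented API as I remember it, so treat the exact names as an assumption:

import Foundation
import Highlightr

// Turn a fenced code block into an attributed string with syntax colors.
// (Highlightr is a third-party package; API names here are taken from its README.)
func highlighted(_ code: String, language: String?) -> NSAttributedString? {
    guard let highlightr = Highlightr() else { return nil }
    highlightr.setTheme(to: "atom-one-dark")
    return highlightr.highlight(code, as: language)
}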

Cancel generation

I think it would be very useful to be able to stop the model from generating output using a button in the chat interface.
Currently I have to wait until the model is done generating, even when I already know the output is not of interest to me, which wastes valuable compute time.
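
For illustration, one common way to support this with Swift concurrency is to keep a handle to the streaming Task and cancel it from a Stop button; the stream closure below is a stand-in, not Ollamac's actual API:

import SwiftUI

@MainActor
final class ChatViewModel: ObservableObject {
    @Published var output = ""
    private var generationTask: Task<Void, Never>?

    // Start streaming and remember the task so it can be cancelled later.
    func generate(with stream: @escaping () -> AsyncStream<String>) {
        generationTask = Task {
            for await token in stream() {
                if Task.isCancelled { break }   // stop appending once cancelled
                output += token
            }
        }
    }

    // Hooked up to a "Stop" button in the chat interface.
    func stopGenerating() {
        generationTask?.cancel()
        generationTask = nil
    }
}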

Auto-Generate Titles

Open-WebUI does this well: there is an option to select a model that auto-generates conversation titles. It would be even better to choose after how many messages this happens -- Open-WebUI seems to do it after the first message, and the titles are often nonsensical if the first message is "hello" or some other throwaway start.
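
For illustration, a hedged sketch of generating a title after the first message using Ollama's non-streaming /api/generate endpoint; the prompt wording, model choice, and localhost URL are assumptions:

import Foundation

struct GenerateRequest: Encodable {
    let model: String
    let prompt: String
    let stream: Bool
}

struct GenerateResponse: Decodable {
    let response: String
}

// Ask a (possibly smaller) model to summarize the first message into a short title.
func generateTitle(for firstMessage: String,
                   model: String = "llama3",
                   host: URL = URL(string: "http://localhost:11434")!) async throws -> String {
    var request = URLRequest(url: host.appendingPathComponent("api/generate"))
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(GenerateRequest(
        model: model,
        prompt: "Summarize this message as a chat title of at most five words:\n\(firstMessage)",
        stream: false))
    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(GenerateResponse.self, from: data)
        .response.trimmingCharacters(in: .whitespacesAndNewlines)
}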

Scrolling while generation is in progress

The problem:
It's hard to read a response while the app automatically scrolls to the last line in the response.

Can you please look into making user scrolling possible while generation is in progress?
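
One approach, sketched below, is to auto-scroll only while a "Follow output" toggle is on, so the user can pause it and read earlier parts of the response (the message model and IDs are illustrative; detecting manual scrolling automatically would be nicer, but it is fiddlier in SwiftUI):

import SwiftUI

// Rough sketch: follow the stream only while "Follow output" is on.
struct TranscriptView: View {
    let messages: [String]
    @State private var followOutput = true

    var body: some View {
        VStack(spacing: 0) {
            ScrollViewReader { proxy in
                ScrollView {
                    LazyVStack(alignment: .leading, spacing: 8) {
                        ForEach(Array(messages.enumerated()), id: \.offset) { item in
                            Text(item.element).id(item.offset)
                        }
                    }
                    .padding()
                }
                .onChange(of: messages.count) { _ in
                    guard followOutput, let last = messages.indices.last else { return }
                    proxy.scrollTo(last, anchor: .bottom)   // only scroll while following
                }
            }
            Toggle("Follow output", isOn: $followOutput)
                .toggleStyle(.switch)
                .padding(8)
        }
    }
}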

Previously Deleted Chat Re-Appears When Beginning New Chat

To recreate the issue:

  • Create a chat and submit a message or two,
  • Delete the chat,
  • Start a new chat.

Right after creating a new chat, the previous chat window briefly flashes on screen before the new chat is loaded.

Does this mean the chat that was deleted wasn't actually deleted, or was the previous chat window still in memory and not properly dumped?

[Bug] Switching chats during generation

While messing around with the app I noticed that I could switch between chats while a response was generating. However, upon switching, the generation continued in the newly active chat, and the generated text was simply combined with that chat's last response.
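
For illustration, a hedged sketch of one way to avoid this: append streamed tokens to the chat the generation started in, looked up by ID, rather than to whichever chat is currently selected (the types are illustrative, not Ollamac's actual model):

import Foundation

struct Chat: Identifiable {
    let id: UUID
    var messages: [String]
}

@MainActor
final class ChatStore: ObservableObject {
    @Published var chats: [UUID: Chat] = [:]
    @Published var activeChatID: UUID?   // the chat shown in the UI; generation ignores it

    // Append a streamed token to the chat that started the generation,
    // not to whichever chat happens to be selected right now.
    func appendToken(_ token: String, toChatWithID id: UUID) {
        guard var chat = chats[id], !chat.messages.isEmpty else { return }
        chat.messages[chat.messages.count - 1] += token
        chats[id] = chat
    }
}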

Let me know if any clarity is needed about this.

Crash on open

I updated to v2 and now it crashes when I open it. I ran the crash report through Llama 3 and got this:

Based on the information provided, it appears that the issue is related to
a breakpoint being hit in the Ollamac app. Specifically:

  • The crashed thread is Thread 0, which is running on the main dispatch
    queue (com.apple.main-thread).
  • The exception type is EXC_BREAKPOINT, which indicates that a
    breakpoint was hit.
  • The terminating process is exc handler [3314], which suggests that the
    app's runtime (e.g., Swift) attempted to handle the breakpoint, but
    ultimately terminated due to the error.

Roadmap - Ollamac v2

Hello everyone, thank you for using Ollamac! I'm excited to announce that I will start developing Ollamac v2 in January 2024, with the goal of completing it by February or March 2024.

Here are the features I'm planning to bring to Ollamac v2:

  • Separate modes for completion and chat.
  • More customization options, including host settings, system prompts, and more.
  • Support for multimodal models.
  • The highly anticipated addition of syntax highlighting.
  • Potential compatibility with iOS, although this depends on the feasibility of using an external server during Apple's review process.

The reason for launching v2 is that I'm planning a complete rewrite of Ollamac, which might affect current functionalities.

I will keep you updated on my progress via X (Twitter). If you have any questions or suggestions for new features, please feel free to join the discussion below or reach out to me directly on X (Twitter) 😃

Homebrew

It would be useful if this application could be installed and updated via Homebrew on the Mac.

Time out on initial message to a large LLM

When running a large LLM like llama3-70B locally, the first message often times out because of the time it takes Ollama to load the model for the very first time. Trying again works, since the model is then already loaded, or 'hot'.

This is a problem with other UIs I've used too. The client needs a way to tell the difference between a long load time and an actual problem with Ollama.
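
One mitigation, sketched below, is simply to give requests a much more generous timeout so a cold model load isn't treated as a failure; the five-minute figure is an arbitrary assumption:

import Foundation

// URLSession with generous timeouts, so a cold model load is not mistaken for an error.
let longLoadSession: URLSession = {
    let config = URLSessionConfiguration.default
    config.timeoutIntervalForRequest = 300   // seconds before a single request gives up
    config.timeoutIntervalForResource = 600  // overall ceiling for the whole transfer
    return URLSession(configuration: config)
}()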

"Send" button keyboard shortcut

Hi, thanks for the nifty mac app!
Is there a keyboard shortcut for the send button?
If not then maybe cmd+enter is a good candidate :)
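
If it isn't wired up yet, a minimal SwiftUI sketch of what a Cmd+Return shortcut on the send button could look like (the button itself is hypothetical):

import SwiftUI

struct SendButton: View {
    var action: () -> Void

    var body: some View {
        Button("Send", action: action)
            // Cmd+Return triggers the same action as clicking the button.
            .keyboardShortcut(.return, modifiers: .command)
    }
}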

Thanks in advance for considering this!

Feature Request: Implement Copy-Paste Functionality for Ollamac similar to ChatGPT

Ollamac is an excellent tool for generating creative and informative content. However, users often find themselves wanting to replicate the seamless copy-paste experience offered by ChatGPT. Currently, users have to manually copy the generated text from Ollamac and paste it into their desired application. This process interrupts workflow and reduces efficiency.

Proposed Solution
Implementing a built-in copy-paste functionality within Ollamac would significantly enhance user experience. Users should be able to generate text and seamlessly copy it to their clipboard with a single click, similar to the functionality provided by ChatGPT.
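
For illustration, a minimal AppKit sketch of a one-click copy action for a generated message (the function name is illustrative):

import AppKit

// Put the generated text on the system clipboard in one click.
func copyToClipboard(_ text: String) {
    let pasteboard = NSPasteboard.general
    pasteboard.clearContents()                  // required before writing
    pasteboard.setString(text, forType: .string)
}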

Using Ollamac with models stored on external HD

First off, great software!
I started running out of disk space, so I relocated my models to an external drive. I was able to get Ollama to work in the terminal by following the documentation (re: server environment variables), but I can't see how to point Ollamac at my model location by default. Any help would be appreciated!
