Search Results

Search found 4009 results on 161 pages for 'protocol buffers'.

Page 9 of 161

  • Protocol (or service publish/discovery) to detect devices in network

    - by Gobliins
    We connect a number of embedded devices to a network. What I am looking for is a way to find each device's IP address and identify the device. We work with Windows PCs, and I am about to write a C# tool that should do this. I have thought about sending a UDP broadcast where the acknowledgement carries the device's IP, which would mean the device needs a daemon running and an IP assigned to itself. Another option is running a service on the device (like a printer does) and having the PC simply look that service up. I have read about things like APIPA, zeroconf, IPv4 link-local, Bonjour, DNS-SD and mDNS; they can automatically assign IPs and publish services on a network. My question is: can someone recommend what would be good for my task? The protocol or service should be light on resources (memory/CPU usage). Are there standard protocols to use? Is DNS a good idea, or would it be too resource-hungry just for finding a device's IP? It should also work when no DHCP server is around. Edit, to clarify a bit: the IP configuration is automatic. The problem to focus on is how to tell the PC which IP on the network (or on a direct connection - in that case there would only be one) belongs to the device (identity).
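
    A minimal sketch of the broadcast-and-reply idea in C#, assuming each device runs a small responder that answers a probe with its identity; the port number (30303) and the message format are made up for illustration and are not part of any standard:

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class DeviceDiscovery
        {
            static void Main()
            {
                using (var udp = new UdpClient())
                {
                    udp.EnableBroadcast = true;
                    udp.Client.ReceiveTimeout = 2000; // stop collecting replies after 2 s of silence

                    // Hypothetical probe message; the daemon on the device must recognise it.
                    byte[] probe = Encoding.ASCII.GetBytes("DISCOVER");
                    udp.Send(probe, probe.Length, new IPEndPoint(IPAddress.Broadcast, 30303));

                    try
                    {
                        while (true)
                        {
                            var remote = new IPEndPoint(IPAddress.Any, 0);
                            byte[] reply = udp.Receive(ref remote);
                            // The sender's endpoint is the device's IP; the payload is its identity.
                            Console.WriteLine("{0} -> {1}", remote.Address, Encoding.ASCII.GetString(reply));
                        }
                    }
                    catch (SocketException)
                    {
                        // Receive timed out: no more devices answered.
                    }
                }
            }
        }

    The reply's source endpoint already gives the device's address, so the payload only has to carry whatever identifies the device; the same structure works with UdpClient's asynchronous methods.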

    Read the article

  • Secure filesharing protocol for fileserver

    - by Hugo
    I'm setting up a file server, and I want lots of clients to access it easily. Up to now I've always used SSHFS to share between different PCs, but since I'm setting up a single file server, I'm looking at other common alternatives. So far I've seen: AFS: it seems to have no security and traffic is unencrypted, so it would require an SSH tunnel - and if I'm going to use SSH, I'd just use SSHFS. NFS: same as above. Also, setting up the server is not so straightforward; it doesn't seem to be KISS enough, at least not for my liking. SMB: same as AFS. It also seems not to be too well documented and, technically, seems a bit poor; it also seems the protocol isn't formally standardized. SSHFS has security, but as a downside it requires every user to have an account on the server, and there's no way to make a certain directory public either. I don't think it has locking, and it isn't very fault-tolerant. Are there any alternatives I'm missing?

    Read the article

  • How to use the HTTP Live Streaming protocol in iPhone SDK 3.0

    - by Pugal Devan
    Hi guys, I have developed an iPhone application and submitted it to the App Store, but it was rejected on the following grounds: "Thank you for submitting your yyyyyyyy application. We have reviewed your application and have determined that it cannot be posted to the App Store at this time because it is not using the HTTP Live Streaming protocol to broadcast streaming video. HTTP Live Streaming is required when streaming video feeds over the cellular network, in order to have an optimal user experience and utilize cellular best practices. This protocol automatically determines bandwidth available to users and adjusts the bandwidth appropriately, even as bandwidth streams change. This allows you the flexibility to have as many streams as you like, as long as 64 kbps is set as the baseline feed." In my app I have to stream prerecorded m4v and mp3 files from my server. I used MPMoviePlayerController to stream and play those video/audio files. How do I implement the HTTP Live Streaming protocol in my app? Can I also get some sample code? Thanks in advance!

    Read the article

  • Slowdowns when reading from a URLConnection's InputStream (even with byte[] and buffers)

    - by user342677
    OK, so after spending two days trying to figure out the problem, and reading about a dizillion articles, I finally decided to man up and ask for some advice (my first time here). Now to the issue at hand: I am writing a program which will parse API data from a game, namely battle logs. There will be A LOT of entries in the database (20+ million), so the parsing speed of each battle log page matters quite a bit. The pages to be parsed look like this: http://api.erepublik.com/v1/feeds/battle_logs/10000/0 (view the source if using Chrome; it doesn't display the page right). Each has 1000 hit entries, followed by a little battle info (the last page will have fewer than 1000, obviously). On average, a page contains 175000 characters, UTF-8 encoding, XML format (v1.0). The program will run locally on a good PC and memory is virtually unlimited (so creating byte[250000] is quite OK). The format never changes, which is quite convenient. Now, I started off as usual:

        //global vars, class declaration skipped

        public WebObject(String url_string, int connection_timeout, int read_timeout,
                         boolean redirects_allowed, String user_agent)
                throws java.net.MalformedURLException, java.io.IOException {
            // Open a URL connection
            java.net.URL url = new java.net.URL(url_string);
            java.net.URLConnection uconn = url.openConnection();
            if (!(uconn instanceof java.net.HttpURLConnection)) {
                throw new java.lang.IllegalArgumentException("URL protocol must be HTTP");
            }
            conn = (java.net.HttpURLConnection) uconn;
            conn.setConnectTimeout(connection_timeout);
            conn.setReadTimeout(read_timeout);
            conn.setInstanceFollowRedirects(redirects_allowed);
            conn.setRequestProperty("User-agent", user_agent);
        }

        public void executeConnection() throws IOException {
            try {
                is = conn.getInputStream(); //global var
                l = conn.getContentLength(); //global var
            } catch (Exception e) {
                //handling code skipped
            }
        }

        //getContentStream and getLength methods which just return 'is' and 'l' are skipped

    Here is where the fun part began. I ran some profiling (using System.currentTimeMillis()) to find out what takes long and what doesn't. The call to this method takes only 200ms on average:

        public InputStream getWebPageAsStream(int battle_id, int page) throws Exception {
            String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page;
            WebObject wobj = new WebObject(url, 10000, 10000, true,
                    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729)");
            wobj.executeConnection();
            l = wobj.getContentLength(); // global variable
            return wobj.getContentStream(); //returns 'is' stream
        }

    200ms is quite expected from a network operation, and I am fine with it. BUT when I parse the InputStream in any way (read it into a String, use a Java XML parser, read it into another ByteArrayStream), the process takes over 1000ms! For example, this code takes 1000ms IF I pass the stream ('is') I got above from getContentStream() directly to this method:

        public static Document convertToXML(InputStream is)
                throws ParserConfigurationException, IOException, SAXException {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            DocumentBuilder db = dbf.newDocumentBuilder();
            Document doc = db.parse(is);
            doc.getDocumentElement().normalize();
            return doc;
        }

    This code, too, takes around 920ms IF the initial InputStream 'is' is passed in (don't read too much into the code itself - it just extracts the data I need by directly counting characters, which can be done thanks to the rigid API feed format):

        public static parsedBattlePage convertBattleToXMLWithoutDOM(InputStream is) throws IOException {
            // Point A
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
            LinkedList ll = new LinkedList();
            String str = br.readLine();
            while (str != null) {
                ll.add(str);
                str = br.readLine();
            }
            if (((String) ll.get(1)).indexOf("error") != -1) {
                return new parsedBattlePage(null, null, true, -1);
            }
            // Point B
            Iterator it = ll.iterator();
            it.next(); it.next(); it.next(); it.next();
            String[][] hits_arr = new String[1000][4];
            String t_str = (String) it.next();
            String tmp = null;
            int j = 0;
            for (int i = 0; t_str.indexOf("time") != -1; i++) {
                hits_arr[i][0] = t_str.substring(12, t_str.length() - 11);
                tmp = (String) it.next();
                hits_arr[i][1] = tmp.substring(14, tmp.length() - 9);
                tmp = (String) it.next();
                hits_arr[i][2] = tmp.substring(15, tmp.length() - 10);
                tmp = (String) it.next();
                hits_arr[i][3] = tmp.substring(18, tmp.length() - 13);
                it.next();
                it.next();
                t_str = (String) it.next();
                j++;
            }
            String[] b_info_arr = new String[9];
            int[] space_nums = {13, 10, 13, 11, 11, 12, 5, 10, 13};
            for (int i = 0; i < space_nums.length; i++) {
                tmp = (String) it.next();
                b_info_arr[i] = tmp.substring(space_nums[i] + 4, tmp.length() - space_nums[i] - 1);
            }
            // Point C
            return new parsedBattlePage(hits_arr, b_info_arr, false, j);
        }

    I have tried replacing the default BufferedReader with

        BufferedReader br = new BufferedReader(new InputStreamReader(is), 250000);

    This didn't change much. My second try was to replace the code between A and B with:

        Iterator it = IOUtils.lineIterator(is, "UTF-8");

    Same result, except this time A-B was 0ms and B-C was 1000ms, so every call to it.next() must have been consuming some significant time (IOUtils is from the apache-commons-io library). And here is the culprit: the time taken to parse the stream into Strings, be it by an iterator or by a BufferedReader, was in ALL cases about 1000ms, while the rest of the code took 0ms (i.e. irrelevant). This means that parsing the stream into a LinkedList, or iterating over it, was for some reason eating up a lot of my system resources. The question was - why? Is it just the way Java is made... no... that's just stupid, so I did another experiment. In my main method I added, after the call to getWebPageAsStream():

        // Point A
        ba = new byte[l]; // 'l' comes from wobj.getContentLength above
        bytesRead = is.read(ba); // 'is' is our URLConnection's original InputStream
        offset = bytesRead;
        while (bytesRead != -1) {
            bytesRead = is.read(ba, offset - 1, l - offset);
            offset += bytesRead;
        }
        // Point B
        InputStream is2 = new ByteArrayInputStream(ba);
        // Now just working with 'is2' - the "copied" stream

    The InputStream-to-byte[] conversion took again 1000ms - this is the way many people suggested to read an InputStream, and still it is slow. And guess what: the two parser methods above (convertToXML() and convertBattleToXMLWithoutDOM()), when passed 'is2' instead of 'is', took, in all four cases, under 50ms to complete. I read a suggestion that the stream waits for the connection to close before unblocking, so I tried using HttpComponents Client 4.0 (http://hc.apache.org/httpcomponents-client/index.html) instead, but the initial InputStream took just as long to parse. E.g. this code:

        public InputStream getWebPageAsStream2(int battle_id, int page) throws Exception {
            String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page;
            HttpClient httpclient = new DefaultHttpClient();
            HttpGet httpget = new HttpGet(url);
            HttpParams p = new BasicHttpParams();
            HttpConnectionParams.setSocketBufferSize(p, 250000);
            HttpConnectionParams.setStaleCheckingEnabled(p, false);
            HttpConnectionParams.setConnectionTimeout(p, 5000);
            httpget.setParams(p);
            HttpResponse response = httpclient.execute(httpget);
            HttpEntity entity = response.getEntity();
            l = (int) entity.getContentLength();
            return entity.getContent();
        }

    took even longer to process (50ms more, for just the network part), and the stream parsing times remained the same. Obviously it could be restructured so as not to create the HttpClient and its parameters every time (faster network time), but the stream issue won't be affected by that. So we come to the central problem: why does the initial URLConnection InputStream (or HttpClient InputStream) take so long to process, while any stream of the same size and content created locally is orders of magnitude faster? I mean, the initial response is already somewhere in RAM, and I can't see any good reason why it is processed so slowly compared with when the same stream is just created from a byte[]. Considering I have to parse millions of entries and thousands of pages like that, a total processing time of almost 1.5s/page seems WAY too long. Any ideas? P.S. Please ask if any more code is required - the only thing I do after parsing is make a PreparedStatement and put the entries into JavaDB in batches of 1000+, and the performance there is OK (~200ms/1000 entries); it could probably be optimized with more caching, but I didn't look into it much.

    Read the article

  • How to implement a network protocol?

    - by gotch4
    Here is a generic question. I'm not in search of the best answer; I'd just like you to share your favourite practices. I want to implement a network protocol in Java (but this is a rather general question - I have faced the same issues in C++), and it is not the first time, as I have done this before. But I think I am missing a good way to implement it. In fact it is usually all about exchanging text messages and some byte buffers between hosts, storing the status, and waiting until the next message comes. The problem is that I usually end up with a bunch of switches and more or less complex if statements that react to different statuses/messages. The whole thing usually gets complicated and hard to maintain. Not to mention that sometimes what comes out has some "blind spots" - I mean statuses of the protocol that have not been covered and that behave in an unpredictable way. I have tried to write some state machine classes that take care of checking start and end statuses for each action in more or less smart ways. This makes programming the protocol very complicated, as I have to write lines and lines of code to cover every possible situation. What I'd like is something like a good pattern, or a best practice used in programming complex protocols, that is easy to maintain, easy to extend and very readable. What are your suggestions?
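
    A table-driven variant of those state machine classes is one way to keep this maintainable: one handler per state that consumes a message and returns the next state, so every state/message combination is accounted for in a single table. A minimal sketch of that idea, written here in C# (the structure maps one-to-one to Java); the states and message names are invented for illustration:

        using System;
        using System.Collections.Generic;

        enum State { AwaitingHello, AwaitingLogin, Ready, Closed }

        class ProtocolSession
        {
            // One handler per state: takes the incoming message, returns the next state.
            private readonly Dictionary<State, Func<string, State>> handlers;
            public State Current { get; private set; } = State.AwaitingHello;

            public ProtocolSession()
            {
                handlers = new Dictionary<State, Func<string, State>>
                {
                    [State.AwaitingHello] = msg => msg == "HELLO" ? State.AwaitingLogin : Fail(msg),
                    [State.AwaitingLogin] = msg => msg.StartsWith("LOGIN ") ? State.Ready : Fail(msg),
                    [State.Ready]         = msg => msg == "BYE" ? State.Closed : State.Ready,
                };
            }

            public void OnMessage(string msg)
            {
                if (!handlers.TryGetValue(Current, out var handler))
                    throw new InvalidOperationException("No handler for state " + Current);
                Current = handler(msg);
            }

            private State Fail(string msg)
            {
                // Every unexpected message ends up here - no silent blind spots.
                throw new InvalidOperationException("Unexpected message '" + msg + "' in state " + Current);
            }
        }

    Adding a state or a message then means adding one table entry, and anything the table does not cover fails loudly instead of slipping through a forgotten else branch.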

    Read the article

  • WIF using SAML 2 protocol / Federate AD FS 2.0 with CAS

    - by spa
    I am trying to implement web SSO with claims-based identity using WIF and AD FS 2.0. Right now I have an existing ASP.NET application which delegates authentication to the AD FS 2.0 server and trusts the security tokens it issues. That works just fine. However, the organization has an existing JA-SIG Central Authentication Service (CAS) server which supports the SAML 2 protocol, and I would like to replace AD FS 2.0 with that existing CAS service. In my understanding, WIF uses WS-Federation, which is like a container around a SAML token. Is it possible to use the plain SAML 2 protocol and its bindings (redirect or POST)? If that is not possible (as I suspect), a second alternative might be to use federated identity and federate AD FS 2.0 with CAS. Is that possible? There is little to no information about this on the web. Thanks :-)

    Read the article

  • Frame buffers won't work with pyglet

    - by Matthew Mitchell
    I have this code:

        def setup_framebuffer(surface):
            # Create texture if not done already
            if surface.texture is None:
                create_texture(surface)
            # Render child to parent
            if surface.frame_buffer is None:
                surface.frame_buffer = glGenFramebuffersEXT(1)
            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, surface.frame_buffer)
            glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, surface.texture, 0)
            glPushAttrib(GL_VIEWPORT_BIT)
            glViewport(0, 0, surface._scale[0], surface._scale[1])
            glMatrixMode(GL_PROJECTION)
            glLoadIdentity()  # Load the projection matrix
            gluOrtho2D(0, surface._scale[0], 0, surface._scale[1])

    For this, despite the second parameter printing as 1 in a test I did, I get an error at: glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, surface.frame_buffer). I only got this after moving to pyglet - GLUT is too limited. Thank you.

    Read the article

  • Error setting up thrift modules for python

    - by MMRUser
    Hi, I'm trying to set up Thrift in order to use it with Cassandra. When I run setup.py, it outputs this message on the command line:

        running build
        running build_py
        running build_ext
        building 'thrift.protocol.fastbinary' extension
        C:\MinGW\bin\gcc.exe -mno-cygwin -mdll -O -Wall -IC:\Python26\include -IC:\Python26\PC -c src/protocol/fastbinary.c -o build\temp.win32-2.6\Release\src\protocol\fastbinary.o
        src/protocol/fastbinary.c:24:24: netinet/in.h: No such file or directory
        src/protocol/fastbinary.c:85:4: #error "Cannot determine endianness"
        src/protocol/fastbinary.c: In function `writeI16':
        src/protocol/fastbinary.c:295: warning: implicit declaration of function `htons'
        src/protocol/fastbinary.c: In function `writeI32':
        src/protocol/fastbinary.c:300: warning: implicit declaration of function `htonl'
        src/protocol/fastbinary.c: In function `readI16':
        src/protocol/fastbinary.c:688: warning: implicit declaration of function `ntohs'
        src/protocol/fastbinary.c: In function `readI32':
        src/protocol/fastbinary.c:696: warning: implicit declaration of function `ntohl'
        error: command 'gcc' failed with exit status 1

    I need some help with this issue. I have already installed MinGW32. Thanks.

    Read the article

  • Implement 3270 protocol in Java

    - by G B
    I've got a big problem with IBM HACL for accessing a server which speaks the 3270 protocol. The library keeps crashing, and our JNI wrapper is effectively a bug-fixing layer for this poorly implemented and poorly documented library (and I suspect we have introduced new bugs with it too). Moreover, in our company everybody knows Java and could maintain the software if we didn't have the JNI layer and the IBM class library. We have to use the C++ class library, because the IBM Java library is unusable: every non-printable character gets translated, and we lose all control characters along the way. Now the question is: can we ditch this library and implement our solution completely in Java (we'd like to avoid using another library from another vendor)? Is the protocol well documented? Is an implementation of 3270-over-SSL really so complex? Thanks.

    Read the article

  • C# DirectSound - Capture buffers not continuous

    - by Wizche
    Hi, I'm trying to capture raw data from my line-in using DirectSound. My problem is that the data is inconsistent from one buffer to the next: if, for example, I capture a sine wave, I see a jump between my last buffer and the new one. To detect this I use a graph widget to draw the first 500 elements of the last buffer and the 500 elements of the new one (snapshot). I initialized my buffer this way:

        format = new WaveFormat
        {
            SamplesPerSecond = 44100,
            BitsPerSample = (short)bitpersample,
            Channels = (short)channels,
            FormatTag = WaveFormatTag.Pcm
        };
        format.BlockAlign = (short)(format.Channels * (format.BitsPerSample / 8));
        format.AverageBytesPerSecond = format.SamplesPerSecond * format.BlockAlign;
        _dwNotifySize = Math.Max(4096, format.AverageBytesPerSecond / 8);
        _dwNotifySize -= _dwNotifySize % format.BlockAlign;
        _dwCaptureBufferSize = NUM_BUFFERS * _dwNotifySize;            // my capture buffer
        _dwOutputBufferSize = NUM_BUFFERS * _dwNotifySize / channels;  // my output buffer

    I set my notifications, one at half the buffer and one at the end:

        _resetEvent = new AutoResetEvent(false);
        _notify = new Notify(_dwCapBuffer);
        bpn1 = new BufferPositionNotify();
        bpn1.Offset = ((_dwCapBuffer.Caps.BufferBytes) / 2) - 1;
        bpn1.EventNotifyHandle = _resetEvent.SafeWaitHandle.DangerousGetHandle();
        bpn2 = new BufferPositionNotify();
        bpn2.Offset = (_dwCapBuffer.Caps.BufferBytes) - 1;
        bpn2.EventNotifyHandle = _resetEvent.SafeWaitHandle.DangerousGetHandle();
        _notify.SetNotificationPositions(new BufferPositionNotify[] { bpn1, bpn2 });
        observer.updateSamplerStatus("Events listener initialization complete!\r\n");

    And here is how I process the events:

        /* Process thread */
        private void eventReceived()
        {
            int offset = 0;
            _dwCaptureThread = new Thread((ThreadStart)delegate
            {
                _dwCapBuffer.Start(true);
                while (isReady)
                {
                    _resetEvent.WaitOne(); // Notification received

                    /* Read the captured buffer */
                    Array read = _dwCapBuffer.Read(offset, typeof(short), LockFlag.None, _dwOutputBufferSize - 1);
                    observer.updateTextPacket("Buffer: " + count.ToString() + " # "
                        + read.GetValue(read.Length - 1).ToString() + " # "
                        + read.GetValue(0).ToString() + "\r\n");

                    /* Print last/new part of the buffer to the debug graph */
                    short[] graphData = new short[1001];
                    Array.Copy(read, graphData, 1000);
                    db.SetBufferDebug(graphData, 500);
                    observer.updateGraph(db.getBufferDebug());

                    offset = (offset + _dwOutputBufferSize) % _dwCaptureBufferSize;

                    /* Out buffer not used */
                    /*_dwDevBuffer.Write(0, read, LockFlag.EntireBuffer);
                    _dwDevBuffer.SetCurrentPosition(0);
                    _dwDevBuffer.Play(0, BufferPlayFlags.Default);*/
                }
                _dwCapBuffer.Stop();
            });
            _dwCaptureThread.Start();
        }

    Any advice? I'm sure I'm failing somewhere in the event processing, but I can't find where. I had developed the same application using the WaveIn API and it worked well. Thanks a lot.

    Read the article

  • How should I handle incomplete packet buffers?

    - by Benjamin Manns
    I am writing a client for a server that typically sends data as strings of 500 or fewer bytes. However, the data will occasionally exceed that, and a single set of data could contain 200,000 bytes for all the client knows (on initialization or significant events). However, I would rather not have each client running with a 50 MB socket buffer (if that is even possible). Each set of data is delimited by a null character, \0. What kind of structure should I look at for storing partially sent data sets? For example, the server may send ABCDEFGHIJKLMNOPQRSTUV\0WXYZ\0123!\0, and I would want to process ABCDEFGHIJKLMNOPQRSTUV, WXYZ, and 123! independently. The server could also send ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890LOL123HAHATHISISREALLYLONG without the terminating character, and I would want that data set stored somewhere for later appending and processing. Also, I'm using the asynchronous socket methods (BeginSend, EndSend, BeginReceive, EndReceive), if that matters.
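
    A minimal sketch of the accumulate-and-split idea, assuming messages are ASCII text delimited by \0 as described; the class and method names here are made up for illustration, and the pending buffer simply grows with the current partial message rather than being preallocated:

        using System.Collections.Generic;
        using System.Text;

        class MessageAssembler
        {
            // Holds bytes received so far that are not yet terminated by a '\0'.
            private readonly List<byte> pending = new List<byte>();

            // Feed each received chunk here; returns any messages completed by it.
            public IEnumerable<string> Push(byte[] chunk, int count)
            {
                var complete = new List<string>();
                for (int i = 0; i < count; i++)
                {
                    if (chunk[i] == 0)
                    {
                        complete.Add(Encoding.ASCII.GetString(pending.ToArray()));
                        pending.Clear();
                    }
                    else
                    {
                        pending.Add(chunk[i]);
                    }
                }
                return complete; // anything left in 'pending' waits for the next chunk
            }
        }

    In the EndReceive callback you would call Push with the receive buffer and the byte count, handle each returned string, and then immediately issue the next BeginReceive; memory usage follows the size of the largest single message rather than a fixed worst-case buffer.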

    Read the article

  • Optimizing sorting a container of objects with heap-allocated buffers - how to avoid hard-copying buffers

    - by Kache4
    I was making sure I knew how to write operator= and the copy constructor correctly in order to sort() properly, so I wrote up a test case. After getting it to work, I realized that operator= was hard-copying all of data_. I figure that if I wanted to sort a container with this structure (its elements have heap-allocated char buffer arrays), it would be faster to just swap the pointers around. Is there a way to do that? Would I have to write my own sort/swap function?

        #include <deque>
        //#include <string>
        //#include <utility>
        //#include <cstdlib>
        #include <cstring>
        #include <iostream>
        //#include <algorithm> // I use sort(), so why does this still compile when commented out?
        #include <boost/filesystem.hpp>
        #include <boost/foreach.hpp>

        using namespace std;
        namespace fs = boost::filesystem;

        class Page {
        public:
            // constructor
            Page(const char* path, const char* data, int size)
                : path_(fs::path(path)), size_(size), data_(new char[size]) {
                // cout << "Creating Page..." << endl;
                strncpy(data_, data, size);
                // cout << "done creating Page..." << endl;
            }

            // copy constructor
            Page(const Page& other)
                : path_(fs::path(other.path())), size_(other.size()), data_(new char[other.size()]) {
                // cout << "Copying Page..." << endl;
                strncpy(data_, other.data(), size_);
                // cout << "done copying Page..." << endl;
            }

            // destructor
            ~Page() {
                delete[] data_;
            }

            // accessors
            const fs::path& path() const { return path_; }
            const char* data() const { return data_; }
            int size() const { return size_; }

            // operators
            Page& operator = (const Page& other) {
                if (this == &other) return *this;
                char* newImage = new char[other.size()];
                strncpy(newImage, other.data(), other.size());
                delete[] data_;
                data_ = newImage;
                path_ = fs::path(other.path());
                size_ = other.size();
                return *this;
            }

            bool operator < (const Page& other) const { return path_ < other.path(); }

        private:
            fs::path path_;
            int size_;
            char* data_;
        };

        class Book {
        public:
            Book(const char* path)
                : path_(fs::path(path)) {
                cout << "Creating Book..." << endl;

                cout << "pushing back #1" << endl;
                pages_.push_back(Page("image1.jpg", "firstImageData", 14));
                cout << "pushing back #3" << endl;
                pages_.push_back(Page("image3.jpg", "thirdImageData", 14));
                cout << "pushing back #2" << endl;
                pages_.push_back(Page("image2.jpg", "secondImageData", 15));

                cout << "testing operator <" << endl;
                cout << pages_[0].path().string() << (pages_[0] < pages_[1] ? " < " : " > ") << pages_[1].path().string() << endl;
                cout << pages_[1].path().string() << (pages_[1] < pages_[2] ? " < " : " > ") << pages_[2].path().string() << endl;
                cout << pages_[0].path().string() << (pages_[0] < pages_[2] ? " < " : " > ") << pages_[2].path().string() << endl;

                cout << "sorting" << endl;
                BOOST_FOREACH (Page p, pages_)
                    cout << p.path().string() << endl;
                sort(pages_.begin(), pages_.end());
                cout << "done sorting\n";
                BOOST_FOREACH (Page p, pages_)
                    cout << p.path().string() << endl;

                cout << "checking datas" << endl;
                BOOST_FOREACH (Page p, pages_) {
                    char data[p.size() + 1];
                    strncpy((char*)&data, p.data(), p.size());
                    data[p.size()] = '\0';
                    cout << p.path().string() << " " << data << endl;
                }
                cout << "done Creating Book" << endl;
            }

        private:
            deque<Page> pages_;
            fs::path path_;
        };

        int main() {
            Book* book = new Book("/some/path/");
        }

    Read the article

  • Run a macro in all buffers in vim

    - by Caleb Huitt - cjhuitt
    I know about the :bufdo command, and was trying to combine it with a macro I had recorded (@a) to add a #include in the proper spot of each of the header files I'd loaded. However, I couldn't find an easy way to run the macro on each buffer. Is there a way to execute a macro through ex mode, which is what :bufdo requires? Or is there another command I'm missing?

    Read the article

  • mysql insert and buffers, is this possible

    - by Grumpy
    How is this possible? First I do:

        INSERT INTO table2 SELECT * FROM table1 WHERE table1.id = 1;

    (50k records should be moved; 6 indexes have to be updated.) Second:

        DELETE FROM table1 WHERE id = 1;

    (50k records are removed.) How is it possible that only 45k of the records are moved? I'm scratching my head over this and can't find the right answer. Is it possible that the insert is still active when the delete already starts?

    Read the article

  • Custom URL protocol in Windows to serve HTML content

    - by Jen
    This question addresses how to register a custom URL protocol to launch an application in response to a link, but I want my handler to serve dynamic content. Essentially, I'm looking to create a web application that runs on the user's machine instead of a web server. I could set up a localhost, but I want to use a "friendly" URL format that the user can reference elsewhere, e.g. a hypothetical cats protocol: cats:fluffy/cheeseburger-consumption-stats How can I accomplish this? Also, do you see any pitfalls with this approach, such as security warnings from browsers? Thanks!
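
    For reference, the registration half of this is just a few registry keys; a minimal sketch in C#, where the "cats" protocol name is the hypothetical example from the question and the executable path is an assumed placeholder. The registered application receives the full URL (e.g. cats:fluffy/...) as its argument and is then responsible for generating and displaying the content itself:

        using Microsoft.Win32;

        class ProtocolRegistration
        {
            static void Main()
            {
                // Registers a hypothetical "cats:" protocol handler machine-wide.
                using (RegistryKey key = Registry.ClassesRoot.CreateSubKey("cats"))
                {
                    key.SetValue("", "URL:cats Protocol");
                    key.SetValue("URL Protocol", "");
                    using (RegistryKey cmd = key.CreateSubKey(@"shell\open\command"))
                    {
                        // %1 is replaced with the clicked URL, e.g. cats:fluffy/cheeseburger-consumption-stats
                        cmd.SetValue("", "\"C:\\Path\\To\\CatsHandler.exe\" \"%1\"");
                    }
                }
            }
        }

    Writing under HKEY_CLASSES_ROOT needs administrative rights; registering under HKEY_CURRENT_USER\Software\Classes works the same way per user without elevation. Note that browsers will still show their usual "open external application" prompt for such links.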

    Read the article

  • VLC helper protocol on Mac OS X

    - by Preben
    Hey everybody, I am trying to add a vlc:// helper protocol on Mac OS X. To register the protocol, I have been playing around with the MoreInternet PrefPane, without success. What I want is to be able to use a vlc://someressource.com/audio.mp3 link in my browser, which should launch VLC and add http://someressource.com/audio.mp3 to the playlist (this works fine on Windows, and also on Linux if I remember correctly). Maybe even just have vlc://http:// so that HTTPS would also be supported. I have no idea how to achieve this. I tried making a Bash script, which MoreInternet would not accept. Then I tried making an application through Automator with my Bash script embedded. That did not work either, as the Automator application has no "creator code" - whatever that is. Can any of you point me in the right direction? Thanks in advance!

    Read the article

  • Developing a project which is an implementation of an open standard/protocol

    - by dotnetdev
    Hi, a lot of interesting code/projects are implementations of protocols, e.g. SNMP. How are projects like these, which depend on implementing a certain format, developed? Is the process something like: get the specification of the protocol and then write code which follows it? For example, XML-RPC is about transmitting XML documents between client and server, so the documentation for this protocol must outline the structure of the XML documents and the way the transport between client and server works, and the coder then implements that functionality (XML document construction, networking between the client and server). Projects I am thinking of (not planning to develop) are C# libraries which can interpret .PSDs, make VHDs, etc. So if I were to develop a C# library to handle .AI files (Illustrator files), what would be the steps to take (such as contacting Adobe, etc.)? Is this the way such projects are developed?

    Read the article

  • Why use buffers to read/write Streams

    - by James Hay
    After reading various questions on reading and writing streams, all the various answers define something like this as the correct way to do it:

        private void CopyStream(Stream input, Stream output)
        {
            byte[] buffer = new byte[16 * 1024];
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, read);
            }
        }

    Two questions: why read and write in these smaller chunks, and what is the significance of the buffer size used?
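
    A small self-contained usage sketch, with the method from the question repeated as a static helper and hypothetical file names: peak memory stays near the 16 KB buffer regardless of how large the source is, and the loop relies on the count returned by Read because a stream is free to return fewer bytes than were asked for.

        using System.IO;

        class StreamCopyDemo
        {
            // Same loop as in the question, written as a reusable static helper.
            static void CopyStream(Stream input, Stream output)
            {
                byte[] buffer = new byte[16 * 1024];
                int read;
                while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, read); // only write the bytes actually read
                }
            }

            static void Main()
            {
                using (var input = File.OpenRead("source.bin"))
                using (var output = File.Create("destination.bin"))
                {
                    CopyStream(input, output);
                }
            }
        }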

    Read the article

  • How to map keys in vim differently for different kinds of buffers

    - by Yogesh Arora
    The problem I am facing is that I have mapped some keys and mouse events for searching in vim while editing a file, but those mappings break the functionality of the quickfix buffer. I was wondering if it is possible to map keys depending on the buffer in which they are used. EDIT - I am adding more info to this question. Let us consider a scenario: I want to map <C-F4> to close a buffer/window. Now this behaviour could depend on a number of things. If I am editing a buffer, it should just close that buffer without changing the layout of the windows; I am using the bufkill plugin for this. It does not depend on the extension of the file but on the type of the buffer. I saw in the vim documentation that there are unlisted and listed buffers. So if it is a listed buffer, it should close using the bufkill commands. If it is not a listed buffer, it should close using the <C-w>c command, changing the window layout. I am new at writing vim functions/scripts - can someone help me get started on this?

    Read the article

  • PHP getting full server name including port number and protocol

    - by vivid-colours
    In PHP, is there a reliable and good way of getting these things:

        Protocol: i.e. http or https
        Server name: e.g. localhost
        Port number: e.g. 8080

    I can get the server name using $_SERVER['SERVER_NAME']. I can kind of get the protocol, but I don't think my approach is perfect:

        if (strtolower(substr($_SERVER["SERVER_PROTOCOL"], 0, 5)) == 'https') {
            return "https";
        } else {
            return "http";
        }

    I don't know how to get the port number, though. The port numbers I am using are not 80 - they are 8080 and 8888. Thank you.

    Read the article
