Search Results

Search found 19771 results on 791 pages for 'event stream'.


  • C# performance: methods of receiving data from a socket?

    - by Daniel
    Let's assume we have a simple internet socket, and it's going to send 10 megabytes (because I want to ignore memory issues) of random data through. Is there any performance difference, or a best-practice method, that one should use for receiving the data? The final output data should be represented by a byte[]. Yes, I know writing an arbitrary amount of data to memory is bad, and if I were downloading a large file I wouldn't be doing it like this. But for argument's sake let's ignore that and assume it's a smallish amount of data. I also realise that the bottleneck here is probably not the memory management but rather the socket receiving. I just want to know what would be the most efficient method of receiving the data. A few dodgy ways I can think of are: have a List and a buffer; after the buffer is full, add it to the list, and at the end call list.ToArray() to get the byte[]. Or write the buffer to a memory stream, and after it's complete construct a byte[] of stream.Length and read it all into it to get the byte[] output. Is there a more efficient/better way of doing this?
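
    A minimal sketch of the second option described above (accumulate into a MemoryStream, then take a byte[]), assuming a connected Socket; the 32 KB buffer size is an arbitrary choice, not a measured optimum:

        using System.IO;
        using System.Net.Sockets;

        static class SocketReceiver
        {
            // Read until the remote side closes the connection, buffering
            // everything in a MemoryStream and copying it out once at the end.
            public static byte[] ReceiveAll(Socket socket)
            {
                var buffer = new byte[32 * 1024];
                using (var ms = new MemoryStream())
                {
                    int read;
                    // Receive returns 0 when the peer shuts down the connection.
                    while ((read = socket.Receive(buffer)) > 0)
                        ms.Write(buffer, 0, read);
                    return ms.ToArray();
                }
            }
        }

    Note that MemoryStream.ToArray() makes one final copy; MemoryStream.GetBuffer() avoids it but returns the (usually oversized) internal buffer.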

    Read the article

  • Java Input/Output streams for unnamed pipes created in native code?

    - by finrod
    Is there a way to easily create Java Input/Output streams for unnamed pipes created in native code? Motivation: I need my own implementation of the Process class. The native code spawns me a new process with subprocess' IO streams redirected to unnamed pipes. Problem: The file descriptors for correct ends of those pipes make their way to Java. At this point I get stuck as I cannot create a new FileDescriptor which I could pass to FileInput/FileOutput stream. I have used reflection to get around the problem and got communication with a simple bouncer sub-process running. However I have a notion that it is not the cleanest way to go. Have you used this approach? Do you see any problems with this approach? (the platform will never change) Searching around the internets revealed similar solution using native code. Any thoughts before I dive into heavy testing of this approach are very welcome. I would like to give a shot to existing code before writing my own IO stream implementations... Thank you.

    Read the article

  • Syncing two AS3 NetStreams

    - by Lowgain
    I'm writing an app that requires an audio stream to be recording while a backing track is played. I have this working, but there is an inconsistent gap in between playback and record starting. I don't know if I can do anything to make the sync perfect every time, so I've been trying to track what time each stream starts so I can calculate the delay and trim it server-side. This also has proved to be a challenge as no events seem to be sent when a connection starts (as far as I know). I've tried using various properties like the streams' buffer sizes, etc. I'm thinking now that as my recorded audio is only mono, I may be able to put some kind of 'control signal' on the second stereo track which I could use to determine exactly when a sound starts recording (or stick the whole backing track in that channel so I can sync them that way). This leaves me with the new problem of properly injecting this sound into the NetStream. If anyone has any idea whether or not any of these ideas will work, how to execute them, or some alternatives, that would be extremely helpful! Been working on this issue for awhile

    Read the article

  • ASP.NET Image Upload 'Parameter Not Valid' Exception

    - by pennylane
    Hi Guys, Im just trying to save a file to disk using a posted stream from jquery uploadify I'm also getting Parameter not valid. On adding to the error message so i can tell where it blew up in production im seeing it blow up on: var postedBitmap = new Bitmap(postedFileStream) any help would be most appreciated public string SaveImageFile(Stream postedFileStream, string fileDirectory, string fileName, int imageWidth, int imageHeight) { string result = ""; string fullFilePath = Path.Combine(fileDirectory, fileName); string exhelp = ""; if (!File.Exists(fullFilePath)) { try { using (var postedBitmap = new Bitmap(postedFileStream)) { exhelp += "got past bmp creation" + fullFilePath; using (var imageToSave = ImageHandler.ResizeImage(postedBitmap, imageWidth, imageHeight)) { exhelp += "got past resize"; if (!Directory.Exists(fileDirectory)) { Directory.CreateDirectory(fileDirectory); } result = "Success"; postedBitmap.Dispose(); imageToSave.Save(fullFilePath, GetImageFormatForFile(fileName)); } exhelp += "got past save"; } } catch (Exception ex) { result = "Save Image File Failed " + ex.Message + ex.StackTrace; Global.SendExceptionEmail("Save Image File Failed " + exhelp, ex); } } return result; }
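
    "Parameter is not valid" from the Bitmap(Stream) constructor is commonly caused by the stream not being positioned at the start (or not being seekable) rather than by bad image data; whether that is the cause here is an assumption. A hedged sketch that copies the posted stream into a seekable MemoryStream and rewinds it first (Stream.CopyTo requires .NET 4.0):

        using System.Drawing;
        using System.IO;

        static class UploadHelper
        {
            // Copy the posted stream into a seekable MemoryStream, rewind it,
            // and hand that to Bitmap. The MemoryStream must stay alive for
            // the lifetime of the Bitmap, so it is not disposed here.
            public static Bitmap LoadBitmap(Stream postedFileStream)
            {
                var ms = new MemoryStream();
                postedFileStream.CopyTo(ms);   // Stream.CopyTo is .NET 4.0+
                ms.Position = 0;               // Bitmap reads from the current position
                return new Bitmap(ms);
            }
        }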

    Read the article

  • c# Unable to open file for reading

    - by Maks
    I'm writing a program that uses FileSystemWatcher to monitor changes to a given directory, and when it recieves OnCreated or OnChanged event, it copies those created/changed files to a specified directorie(s). At first I had problems with the fact that OnChanged/OnCreated events can be sent twice (not acceptable in case it needed to process 500MB file) but I made a way around this and with what I'm REALLY STUCKED with is getting the following IOException: The process cannot access the file 'C:\Where are Photos\bookmarks (11).html' because it is being used by another process. Thus, preventing the program from copying all the files it should. So as I mentioned, when user uses this program he/she specifes monitored directory, when user copies/creates/changes file in that directory, program should get OnCreated/OnChanged event and then copy that file to few other directories. Above error happens in all casess, if user copies few files that needs to owerwrite other ones in folder being monitored or when copying bulk of several files or even sometimes when copying one file in a monitored directory. Whole program is quite big so I'm sending the most important parts. OnCreated: private void OnCreated(object source, FileSystemEventArgs e) { AddLogEntry(e.FullPath, "created", ""); // Update last access data if it's file so the same file doesn't // get processed twice because of sending another event. if (fileType(e.FullPath) == 2) { lastPath = e.FullPath; lastTime = DateTime.Now; } // serves no purpose now, it will be remove soon string fileName = GetFileName(e.FullPath); // copies file from source to few other directories Copy(e.FullPath, fileName); Console.WriteLine("OnCreated: " + e.FullPath); } OnChanged: private void OnChanged(object source, FileSystemEventArgs e) { // is it directory if (fileType(e.FullPath) == 1) return; // don't mind directory changes itself // Only if enough time has passed or if it's some other file // because two events can be generated int timeDiff = ((TimeSpan)(DateTime.Now - lastTime)).Seconds; if ((timeDiff < minSecsDiff) && (e.FullPath.Equals(lastPath))) { Console.WriteLine("-- skipped -- {0}, timediff: {1}", e.FullPath, timeDiff); return; } // Update last access data for above to work lastPath = e.FullPath; lastTime = DateTime.Now; // Only if size is changed, the rest will handle other handlers if (e.ChangeType == WatcherChangeTypes.Changed) { AddLogEntry(e.FullPath, "changed", ""); string fileName = GetFileName(e.FullPath); Copy(e.FullPath, fileName); Console.WriteLine("OnChanged: " + e.FullPath); } } fileType: private int fileType(string path) { if (Directory.Exists(path)) return 1; // directory else if (File.Exists(path)) return 2; // file else return 0; } Copy: private void Copy(string srcPath, string fileName) { foreach (string dstDirectoy in paths) { string eventType = "copied"; string error = "noerror"; string path = ""; string dirPortion = ""; // in case directory needs to be made if (srcPath.Length > fsw.Path.Length) { path = srcPath.Substring(fsw.Path.Length, srcPath.Length - fsw.Path.Length); int pos = path.LastIndexOf('\\'); if (pos != -1) dirPortion = path.Substring(0, pos); } if (fileType(srcPath) == 1) { try { Directory.CreateDirectory(dstDirectoy + path); //Directory.CreateDirectory(dstDirectoy + fileName); eventType = "created"; } catch (IOException e) { eventType = "error"; error = e.Message; } } else { try { if (!overwriteFile && File.Exists(dstDirectoy + path)) continue; // create new dir anyway even if it exists just to be sure 
Directory.CreateDirectory(dstDirectoy + dirPortion); // copy file from where event occured to all specified directories using (FileStream fsin = new FileStream(srcPath, FileMode.Open, FileAccess.Read, FileShare.Read)) { using (FileStream fsout = new FileStream(dstDirectoy + path, FileMode.Create, FileAccess.Write)) { byte[] buffer = new byte[32768]; int bytesRead = -1; while ((bytesRead = fsin.Read(buffer, 0, buffer.Length)) > 0) fsout.Write(buffer, 0, bytesRead); } } } catch (Exception e) { if ((e is IOException) && (overwriteFile == false)) { eventType = "skipped"; } else { eventType = "error"; error = e.Message; // attempt to find and kill the process locking the file. // failed, miserably System.Diagnostics.Process tool = new System.Diagnostics.Process(); tool.StartInfo.FileName = "handle.exe"; tool.StartInfo.Arguments = "\"" + srcPath + "\""; tool.StartInfo.UseShellExecute = false; tool.StartInfo.RedirectStandardOutput = true; tool.Start(); tool.WaitForExit(); string outputTool = tool.StandardOutput.ReadToEnd(); string matchPattern = @"(?<=\s+pid:\s+)\b(\d+)\b(?=\s+)"; foreach (Match match in Regex.Matches(outputTool, matchPattern)) { System.Diagnostics.Process.GetProcessById(int.Parse(match.Value)).Kill(); } Console.WriteLine("ERROR: {0}: [ {1} ]", e.Message, srcPath); } } } AddLogEntry(dstDirectoy + path, eventType, error); } } I checked everywhere in my program and whenever I use some file I use it in using block so even writing event to log (class for what I ommited since there is probably too much code already in post) wont lock the file, that is it shouldn't since all operations are using using statement block. I simply have no clue who's locking the file if not my program "copy" process from user through Windows or something else. Right now I have two possible "solutions" (I can't say they are clean solutions since they are hacks and as such not desireable). Since probably the problem is with fileType method (what else could lock the file?) I tried changing it to this, to simulate "blocking-until-ready-to-open" operation: fileType: private int fileType(string path) { FileStream fs = null; int ret = 0; bool run = true; if (Directory.Exists(path)) ret = 1; else { while (run) { try { fs = new FileStream(path, FileMode.Open); ret = 2; run = false; } catch (IOException) { } finally { if (fs != null) { fs.Close(); fs.Dispose(); } } } } return ret; } This is working as much as I could tell (test), but... it's hack, not to mention other deficients. The other "solution" I could try (I didn't test it yet) is using GC.Collect() somewhere at the end of fileType() method. Maybe even worse "solution" than previous one. Can someone pleas tell me, what on earth is locking the file, preventing it from opening and how can I fix that? What am I missing to see? Thanks in advance.
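
    One frequent culprit with FileSystemWatcher is that OnCreated/OnChanged fires while the process copying the file (often Explorer itself) still holds a write lock, so the watcher's own program is not the locker at all. A bounded retry is usually cleaner than the infinite loop in the fileType() hack above; the attempt count and delay below are arbitrary assumptions:

        using System.IO;
        using System.Threading;

        static class FileReady
        {
            // Try to open the file for shared reading, backing off briefly while
            // the copying process still holds it. Rethrows the IOException if the
            // file never becomes available within maxAttempts tries.
            public static FileStream OpenWhenReady(string path,
                                                   int maxAttempts = 20,
                                                   int delayMs = 250)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return new FileStream(path, FileMode.Open,
                                              FileAccess.Read, FileShare.Read);
                    }
                    catch (IOException)
                    {
                        if (attempt >= maxAttempts) throw;
                        Thread.Sleep(delayMs);  // still locked; wait and retry
                    }
                }
            }
        }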

    Read the article

  • Best Functional Approach

    - by dbyrne
    I have some mutable scala code that I am trying to rewrite in a more functional style. It is a fairly intricate piece of code, so I am trying to refactor it in pieces. My first thought was this: def iterate(count:Int,d:MyComplexType) = { //Generate next value n //Process n causing some side effects return iterate(count - 1, n) } This didn't seem functional at all to me, since I still have side effects mixed throughout my code. My second thought was this: def generateStream(d:MyComplexType):Stream[MyComplexType] = { //Generate next value n return Stream.cons(n, generateStream(n)) } for (n <- generateStream(initialValue).take(2000000)) { //process n causing some side effects } This seemed like a better solution to me, because at least I've isolated my functional value-generation code from the mutable value-processing code. However, this is much less memory efficient because I am generating a large list that I don't really need to store. This leaves me with 3 choices: Write a tail-recursive function, bite the bullet and refactor the value-processing code Use a lazy list. This is not a memory sensitive app (although it is performance sensitive) Come up with a new approach. I guess what I really want is a lazily evaluated sequence where I can discard the values after I've processed them. Any suggestions?

    Read the article

  • F# ref-mutable vars vs object fields

    - by rwallace
    I'm writing a parser in F#, and it needs to be as fast as possible (I'm hoping to parse a 100 MB file in less than a minute). As normal, it uses mutable variables to store the next available character and the next available token (i.e. both the lexer and the parser proper use one unit of lookahead). My current partial implementation uses local variables for these. Since closure variables can't be mutable (anyone know the reason for this?) I've declared them as ref: let rec read file includepath = let c = ref ' ' let k = ref NONE let sb = new StringBuilder() use stream = File.OpenText file let readc() = c := stream.Read() |> char // etc I assume this has some overhead (not much, I know, but I'm trying for maximum speed here), and it's a little inelegant. The most obvious alternative would be to create a parser class object and have the mutable variables be fields in it. Does anyone know which is likely to be faster? Is there any consensus on which is considered better/more idiomatic style? Is there another option I'm missing?

    Read the article

  • Help file not working

    - by meryl
    Hi, can anyone help me ? I wanto to play an audio file and whenever I press the stop button , the already played part of the file should be saved. Unfortunately , what I get is an audio file (.wav) which actually is unplayable. Thanks //**************************** void play_cut() { try { // First, we get the format of the input file final AudioFileFormat.Type fileType = AudioSystem.getAudioFileFormat(inputAudio).getType(); // Then, we get a clip for playing the audio. c = AudioSystem.getClip(); // We get a stream for playing the input file. AudioInputStream ais = AudioSystem.getAudioInputStream(inputAudio); // We use the clip to open (but not start) the input stream c.open(ais); // We get the format of the audio codec (not the file format we got above) final AudioFormat audioFormat = ais.getFormat(); c.start(); AudioInputStream startStream = new AudioInputStream(new FileInputStream(inputAudio), audioFormat, c.getLongFramePosition()); AudioSystem.write(startStream, fileType, outputAudio); } catch (UnsupportedAudioFileException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (LineUnavailableException e) { e.printStackTrace(); } }// end play_cut //****************************

    Read the article

  • Android: Problems downloading images and converting to bitmaps

    - by Mike
    Hi all, I am working on an application that downloads images from a url. The problem is that only some images are being correctly downloaded and others are not. First off, here is the problem code: public Bitmap downloadImage(String url) { HttpClient client = new DefaultHttpClient(); HttpResponse response = null; try { response = client.execute(new HttpGet(url)); } catch (ClientProtocolException cpe) { Log.i(LOG_FILE, "client protocol exception"); return null; } catch (IOException ioe) { Log.i(LOG_FILE, "IOE downloading image"); return null; } catch (Exception e) { Log.i(LOG_FILE, "Other exception downloading image"); return null; } // Convert images from stream to bitmap object try { Bitmap image = BitmapFactory.decodeStream(response.getEntity().getContent()); if(image==null) Log.i(LOG_FILE, "image conversion failed"); return image; } catch (Exception e) { Log.i(LOG_FILE, "Other exception while converting image"); return null; } } So what I have is a method that takes the url as a string argument and then downloads the image, converts the HttpResponse stream to a bitmap by means of the BitmapFactory.decodeStream method, and returns it. The problem is that when I am on a slow network connection (almost always 3G rather than Wi-Fi) some images are converted to null--not all of them, only some of them. Using a Wi-Fi connection works perfectly; all the images are downloaded and converted properly. Does anyone know why this is happening? Or better, how can I fix this? How would I even go about testing to determine the problem? Any help is awesome; thank you!

    Read the article

  • public class ImageHandler : IHttpHandler

    - by Ken
    cmd.Parameters.AddWithValue("@id", new system.Guid (imageid)); What using System reference would this require? Here is the handler: using System; using System.Collections.Specialized; using System.Web; using System.Web.Configuration; using System.Web.Security; using System.Globalization; using System.Configuration; using System.Data.SqlClient; using System.Data; using System.IO; using System.Web.Profile; using System.Drawing; public class ImageHandler : IHttpHandler { public void ProcessRequest(HttpContext context) { string imageid; if (context.Request.QueryString["id"] != null) imageid = (context.Request.QueryString["id"]); else throw new ArgumentException("No parameter specified"); context.Response.ContentType = "image/jpeg"; Stream strm = ShowProfileImage(imageid.ToString()); byte[] buffer = new byte[8192]; int byteSeq = strm.Read(buffer, 0, 8192); while (byteSeq > 0) { context.Response.OutputStream.Write(buffer, 0, byteSeq); byteSeq = strm.Read(buffer, 0, 8192); } //context.Response.BinaryWrite(buffer); } public Stream ShowProfileImage(String imageid) { string conn = ConfigurationManager.ConnectionStrings["MyConnectionString1"].ConnectionString; SqlConnection connection = new SqlConnection(conn); string sql = "SELECT image FROM Profile WHERE UserId = @id"; SqlCommand cmd = new SqlCommand(sql, connection); cmd.CommandType = CommandType.Text; cmd.Parameters.AddWithValue("@id", new system.Guid (imageid));//Failing Here!!!! connection.Open(); object img = cmd.ExecuteScalar(); try { return new MemoryStream((byte[])img); } catch { return null; } finally { connection.Close(); } } public bool IsReusable { get { return false; } } }
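
    No extra using directive should be needed: Guid lives in the System namespace, which the handler already imports, and C# identifiers are case-sensitive, so the lowercase "system.Guid" is what fails. A sketch of the corrected parameter line:

        using System;
        using System.Data.SqlClient;

        static class ImageHandlerHelpers
        {
            // "using System;" is already present, so Guid resolves directly.
            public static void AddIdParameter(SqlCommand cmd, string imageid)
            {
                cmd.Parameters.AddWithValue("@id", new Guid(imageid));
            }
        }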

    Read the article

  • Downloading a webpage in C# example

    - by Chris
    I am trying to understand some example code on this web page: (http://www.csharp-station.com/HowTo/HttpWebFetch.aspx) that downloads a file from the internet. The piece of code quoted below goes through a loop getting chunks of data and saving them to a string until all the data has been downloaded. As I understand it, "count" contains the size of the downloaded chunk and the loop runs until count is 0 (an empty chunk of data is downloaded). My question is, isn't it possible that count could be 0 without the file being completely downloaded? Say if the network connection is interrupted, the stream may not have any data to read on a pass of the loop and count should be 0, ending the download prematurely. Or does ResStream.Read stop the program until it gets data? Is this the correct way to save a stream? int count = 0; do { // fill the buffer with data count = resStream.Read(buf, 0, buf.Length); // make sure we read some data if (count != 0) { // translate from bytes to ASCII text tempString = Encoding.ASCII.GetString(buf, 0, count); // continue building the string sb.Append(tempString); } } while (count > 0); // any more data to read?
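
    As far as the Stream contract goes, Read() returns 0 only at end of stream; a dropped connection is normally reported as an IOException/WebException rather than as a silent zero-length read, so wrapping the loop in try/catch is the usual way to tell the two apart. A sketch of the same loop accumulating raw bytes (decoding once at the end also avoids splitting multi-byte characters, though that does not matter for ASCII):

        using System.IO;
        using System.Net;

        static class Downloader
        {
            // Read the response stream in chunks until Read() reports end of stream.
            public static byte[] Download(string url)
            {
                var request = WebRequest.Create(url);
                using (var response = request.GetResponse())
                using (var stream = response.GetResponseStream())
                using (var ms = new MemoryStream())
                {
                    var buf = new byte[8192];
                    int count;
                    while ((count = stream.Read(buf, 0, buf.Length)) > 0)
                        ms.Write(buf, 0, count);
                    return ms.ToArray();
                }
            }
        }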

    Read the article

  • suppress error using fread()

    - by Mikey1980
    I wrote a script for screen pops from our soft phone that locates a directory listing for the caller but occasionally they get "Can't read input stream" and the rest of the script quits. Does anyone have any suggestions on how to suppress the error message and allow the rest of the script to run? Thanks! $i=0; $open = fopen("http://www.411.ca/whitepages/?n=".$_GET['phone'], "r"); $read = fread($open, 9024); fclose($open); eregi("'/(.*)';",$read,$got); $tv = ereg_replace('[[:blank:]]',' ',$got[1]); $url = "http://www.411.ca/".$tv; while ($name=="unknown" && $i < 15) { ## try 15 times before giving up $file = @ fopen($fn=$url,"r") or die ("Can't read input stream"); $text = fread($file,16384); if (preg_match('/"name">(.*?)<\/div>/is',$text,$found)) { $name = $found[1]; } if (preg_match('/"phone">(.*?)<\/div>/is',$text,$found)) { $phone = $found[1]; } if (preg_match('/"address">(.*?)<\/div>/is',$text,$found)) { $address = $found[1]; } fclose($file); $i++; }

    Read the article

  • Ftp throws WebException only in .NET 4.0

    - by Trakhan
    I have following c# code. It runs just fine when compiled against .NET framework 3.5 or 2.0 (I did not test it against 3.0, but it will most likely work too). The problem is, that it fails when built against .NET framework 4.0. FtpWebRequest Ftp = (FtpWebRequest)FtpWebRequest.Create(Url_ + '/' + e.Name); Ftp.Method = WebRequestMethods.Ftp.UploadFile; Ftp.Credentials = new NetworkCredential(Login_, Password_); Ftp.UseBinary = true; Ftp.KeepAlive = false; Ftp.UsePassive = true; Stream S = Ftp.GetRequestStream(); byte[] Content = null; bool Continue = false; do { try { Continue = false; Content = File.ReadAllBytes(e.FullPath); } catch (Exception) { Continue = true; System.Threading.Thread.Sleep(500); } } while (Continue); S.Write(Content, 0, Content.Length); S.Close(); FtpWebResponse Resp = (FtpWebResponse)Ftp.GetResponse(); if (Resp.StatusCode != FtpStatusCode.CommandOK) Console.WriteLine(Resp.StatusDescription); The problem is in call Stream S = Ftp.GetRequestStream();, which throws an en instance of WebException with message “The remote server returned an error: (500) Syntax error, command unrecognized”. Does anybody know why it is so? PS. I communicate virtual ftp server in ECM Alfresco.
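
    A diagnostic sketch, not a fix: when GetRequestStream() throws, the WebException usually carries the server's FtpWebResponse, and its StatusDescription shows which command the server rejected, which helps narrow down what .NET 4.0 is sending differently:

        using System;
        using System.IO;
        using System.Net;

        static class FtpDiagnostics
        {
            // Surface the FTP status carried inside the WebException, then rethrow.
            public static Stream OpenUploadStream(FtpWebRequest ftp)
            {
                try
                {
                    return ftp.GetRequestStream();
                }
                catch (WebException ex)
                {
                    var response = ex.Response as FtpWebResponse;
                    if (response != null)
                        Console.WriteLine("{0}: {1}",
                            response.StatusCode, response.StatusDescription);
                    throw;
                }
            }
        }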

    Read the article

  • Using FtpWebRequest with an error: the remote server returned error 530 not logged in

    - by user1361207
    I am trying to use the ftpWebRequest in c# my code is // Get the object used to communicate with the server. FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://192.168.20.10/file.txt"); request.Method = WebRequestMethods.Ftp.UploadFile; // This example assumes the FTP site uses anonymous logon. request.Credentials = new NetworkCredential("dev\ftp", "devftp"); // Copy the contents of the file to the request stream. StreamReader sourceStream = new StreamReader(@"\file.txt"); byte[] fileContents = Encoding.UTF8.GetBytes(sourceStream.ReadToEnd()); sourceStream.Close(); request.ContentLength = fileContents.Length; request.UsePassive = true; Stream requestStream = request.GetRequestStream(); requestStream.Write(fileContents, 0, fileContents.Length); requestStream.Close(); FtpWebResponse response = (FtpWebResponse)request.GetResponse(); Console.WriteLine("Upload File Complete, status {0}", response.StatusDescription); response.Close(); and I get an Error in request.GetRequestStream(); the error is: the remote server returned error 530 not loged in if I try to go in to a browser page and in the url I write ftp://192.168.20.10/ the brows page is asking me for a name and password, I put the same name and password and I see all the files and folders in the ftp folder. can any one help me with this problem?
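
    One thing worth noting (an observation, not a guaranteed fix): in a regular C# string literal "\f" is a form-feed escape, so "dev\ftp" contains no backslash at all and the server never sees the intended DOMAIN\user name. A sketch of the escaped alternatives:

        using System.Net;

        static class FtpCredentials
        {
            public static NetworkCredential Build()
            {
                // A verbatim string keeps the backslash literal...
                return new NetworkCredential(@"dev\ftp", "devftp");
                // ...equivalently: new NetworkCredential("dev\\ftp", "devftp")
                // or pass the domain separately:
                //                  new NetworkCredential("ftp", "devftp", "dev");
            }
        }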

    Read the article

  • std::ifstream buffer caching

    - by ledokol
    Hello everybody, In my application I'm trying to merge sorted files (keeping them sorted of course), so I have to iterate through each element in both files to write the minimal to the third one. This works pretty much slow on big files, as far as I don't see any other choice (the iteration has to be done) I'm trying to optimize file loading. I can use some amount of RAM, which I can use for buffering. I mean instead of reading 4 bytes from both files every time I can read once something like 100Mb and work with that buffer after that, until there will be no element in buffer, then I'll refill the buffer again. But I guess ifstream is already doing that, will it give me more performance and is there any reason? If fstream does, maybe I can change size of that buffer? added My current code looks like that (pseudocode) // this is done in loop int i1 = input1.read_integer(); int i2 = input2.read_integer(); if (!input1.eof() && !input2.eof()) { if (i1 < i2) { output.write(i1); input2.seek_back(sizeof(int)); } else input1.seek_back(sizeof(int)); output.write(i2); } } else { if (input1.eof()) output.write(i2); else if (input2.eof()) output.write(i1); } What I don't like here is seek_back - I have to seek back to previous position as there is no way to peek 4 bytes too much reading from file if one of the streams is in EOF it still continues to check that stream instead of putting contents of another stream directly to output, but this is not a big issue, because chunk sizes are almost always equal. Can you suggest improvement for that? Thanks.

    Read the article

  • Parsing data from txt file in J2ME

    - by CSFYPMAIL
    Basically I'm creating an indoor navigation system in J2ME. I've put the location details in a .txt file i.e. Locations names and their coordinates. Edges with respective start node and end node as well as the weight (length of the node). I put both details in the same file so users dont have to download multiple files to get their map working (it could become time consuming and seem complex). So what i did is to seperate the deferent details by typing out location Names and coordinates first, After that I seperated that section from the next section which is the edges by drawing a line with multiple underscores. Now the problem I'm having is parsing the different details into seperate arrays by setting up a command (while manually tokenizing the input stream) to check wether the the next token is an underscore. If it is, (in pseudocode terms), move to the next line in the stream, create a new array and fill it up with the next set of details. I found a some explanation/code HERE that does something similar but still parses into one array, although it manually tokenizes the input. Any ideas on what to do? Thanks Text File Explanation The text has the following format... <--1stSection--  /**   * Section one has the following format   * xCoordinate;yCoordinate;LocationName   */ 12;13;New York City 40;12;Washington D.C. ...e.t.c _________________________ <--(underscore divider) <--2ndSection--  /**   * Its actually an adjacency list but indirectly provides "edge" details.   * Its in this form   * StartNode/MainReferencePoint;Endnode1;distance2endNode1;Endnode2;distance2endNode2;...e.t.c   */ philadelphia;Washington D.C.;7;New York City;2 New York City;Florida;24;Illinois;71 ...e.t.c

    Read the article

  • Protocol specification in XML

    - by Mathijs
    Is there a way to specify a packet-based protocol in XML, so (de)serialization can happen automatically? The context is as follows. I have a device that communicates through a serial port. It sends and receives a byte stream consisting of 'packets'. A packet is a collection of elementary data types and (sometimes) other packets. Some elements of packets are conditional; their inclusion depends on earlier elements. I have a C# application that communicates with this device. Naturally, I don't want to work on a byte-level throughout my application; I want to separate the protocol from my application code. Therefore I need to translate the byte stream to structures (classes). Currently I have implemented the protocol in C# by defining a class for each packet. These classes define the order and type of elements for each packet. Making class members conditional is difficult, so protocol information ends up in functions. I imagine XML that looks like this (note that my experience designing XML is limited): <packet> <field name="Author" type="int32" /> <field name="Nickname" type="bytes" size="4"> <condition type="range"> <field>Author</field> <min>3</min> <max>6</max> </condition> </field> </packet> .NET has something called a 'binary serializer', but I don't think that's what I'm looking for. Is there a way to separate protocol and code, even if packets 'include' other packets and have conditional elements?
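
    A sketch of one way to start down this road: load a definition like the XML above into a small field-descriptor model with LINQ to XML, which a generic reader/writer could then drive. The element and attribute names simply mirror the example; conditional fields are omitted for brevity:

        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        class FieldDef
        {
            public string Name;
            public string Type;
            public int? Size;
        }

        static class PacketDefinition
        {
            // Parse <packet><field .../>...</packet> into a list of field descriptors.
            public static List<FieldDef> Load(string xml)
            {
                return XElement.Parse(xml)
                    .Elements("field")
                    .Select(f => new FieldDef
                    {
                        Name = (string)f.Attribute("name"),
                        Type = (string)f.Attribute("type"),
                        Size = (int?)f.Attribute("size")
                    })
                    .ToList();
            }
        }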

    Read the article

  • Benefits of "Don't Fragment" on TCP Packets?

    - by taspeotis
    One of our customers is having trouble submitting data from our application (on their PC) to a server (different geographical location). When sending packets under 1100 bytes everything works fine, but above this we see TCP retransmitting the packet every few seconds and getting no response. The packets we are using for testing are about 1400 bytes (but less than 1472). I can send an ICMP ping to www.google.com that is 1472 bytes and get a response (so it's not their router/first few hops). I found that our application sets the DF flag for these packets, and I believe a router along the way to the server has an MTU less than/equal to 1100 and dropping the packet. This affects 1 client in 5000, but since everybody's routes will be different this is expected. The data is a SOAP envelope and we expect a SOAP response back. I can't justify WHY we do it, the code to do this was written by a previous developer. So... Are there are benefits OR justification to setting the DF flag on TCP packets for application data? I can think of reasons it is needed for network diagnostics applications but not in our situation (we want the data to get to the endpoint, fragmented or not). One of our sysadmins said that it might have something to do with us using SSL, but as far as I know SSL is like a stream and regardless of fragmentation, as long as the stream is rebuilt at the end, there's no problem. If there's no good justification I will be changing the behaviour of our application. Thanks in advance.

    Read the article

  • Strange thread behavior in Perl

    - by Zaid
    Tom Christiansen's example code (à la perlthrtut) is a recursive, threaded implementation of finding and printing all prime numbers between 3 and 1000. Below is a mildly adapted version of the script #!/usr/bin/perl # adapted from prime-pthread, courtesy of Tom Christiansen use strict; use warnings; use threads; use Thread::Queue; sub check_prime { my ($upstream,$cur_prime) = @_; my $child; my $downstream = Thread::Queue->new; while (my $num = $upstream->dequeue) { next unless ($num % $cur_prime); if ($child) { $downstream->enqueue($num); } else { $child = threads->create(\&check_prime, $downstream, $num); if ($child) { print "This is thread ",$child->tid,". Found prime: $num\n"; } else { warn "Sorry. Ran out of threads.\n"; last; } } } if ($child) { $downstream->enqueue(undef); $child->join; } } my $stream = Thread::Queue->new(3..shift,undef); check_prime($stream,2); When run on my machine (under ActiveState & Win32), the code was capable of spawning only 118 threads (last prime number found: 653) before terminating with a 'Sorry. Ran out of threads' warning. In trying to figure out why I was limited to the number of threads I could create, I replaced the use threads; line with use threads (stack_size => 1);. The resultant code happily dealt with churning out 2000+ threads. Can anyone explain this behavior?

    Read the article

  • send and receive in socket [closed]

    - by user3696492
    I have trouble in sending an object through socket in c#, my client can send to server but server can't send to client, i think there is something wrong with the client. Server private void Form1_Load(object sender, EventArgs e) { CheckForIllegalCrossThreadCalls = false; Thread a = new Thread(connect); a.Start(); } private void sendButton_Click(object sender, EventArgs e) { client.Send(SerializeData(ShapeList[ShapeList.Count - 1])); } void connect() { try { server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); iep = new IPEndPoint(IPAddress.Parse("127.0.0.1"), 5555); server.Bind(iep); server.Listen(10); client = server.Accept(); while (true) { byte[] data = new byte[1024]; client.Receive(data); PaintObject a = (PaintObject)DeserializeData(data); ShapeList.Add(a); Invalidate(); } } catch (Exception ex) { MessageBox.Show(ex.Message); } } client private void Form1_Load(object sender, EventArgs e) { CheckForIllegalCrossThreadCalls = false; Thread a = new Thread(connect); a.Start(); } private void SendButton_Click(object sender, EventArgs e) { client.Send(SerializeData(ShapeList[ShapeList.Count - 1])); } void connect() { try { client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); iep = new IPEndPoint(IPAddress.Parse("127.0.0.1"), 5555); client.Connect(iep); while (true) { byte[] data = new byte[1024]; client.Receive(data); PaintObject a = (PaintObject)DeserializeData(data); ShapeList.Add(a); Invalidate(); } } catch (Exception ex) { MessageBox.Show(ex.Message); } }
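
    A likely problem (an assumption, since the serializer code is not shown): TCP is a byte stream, so a single Receive() into a fixed 1024-byte buffer can return a partial object or run two objects together. A common remedy is to length-prefix each serialized object and loop until that many bytes have arrived; the sender would then call Send(BitConverter.GetBytes(payload.Length)) followed by Send(payload). ReceiveExactly below is an illustrative helper, not a framework method:

        using System;
        using System.Net.Sockets;

        static class Framing
        {
            // Read a 4-byte length prefix, then exactly that many payload bytes.
            public static byte[] ReceiveMessage(Socket socket)
            {
                byte[] lengthBytes = ReceiveExactly(socket, 4);
                int length = BitConverter.ToInt32(lengthBytes, 0);
                return ReceiveExactly(socket, length);
            }

            static byte[] ReceiveExactly(Socket socket, int count)
            {
                var buffer = new byte[count];
                int received = 0;
                while (received < count)
                {
                    int n = socket.Receive(buffer, received, count - received,
                                           SocketFlags.None);
                    if (n == 0)
                        throw new SocketException();  // connection closed early
                    received += n;
                }
                return buffer;
            }
        }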

    Read the article

  • Problem reading in file written with xdr using c

    - by Inga
    I am using Ubuntu 10.4 and have two (long) C programs, one that writes a file using XDR, and one that uses this file as input. However, the second program does not manage to read in the written file. Everything looks perfectly fine, it just does not work. More spesifically it fails at the last line added here with the error message xdr_string(), which indicates that it can not read in the first line of the input file. I do no see any obvious errors. The input file is written out, have a content and I can see the right strings using stings -a -n 2 "inputfile". Anyone have any idea what is going wrong? Relevant parts of program 1 (writer): /** * create compressed XDR output stream */ output_file=open_write_pipe(output_filename); xdrstdio_create(&xdrs, output_file, XDR_ENCODE); /** * print material name */ if( xdr_string(&xdrs, &name, _POSIX_NAME_MAX) == FALSE ) xdr_err("xdr_string()"); Relevant parts of program 2 (reader): /** * open data file */ input_file=open_data_file(input_filename, "r"); if( input_file == NULL ){ ERROR(input_filename); exit(EXIT_FAILURE); } /** * create input XDR stream */ xdrstdio_create(&xdrs, input_file, XDR_DECODE); /** * read material name */ if(xdr_string(&xdrs, &name, _POSIX_NAME_MAX) == FALSE) XDR_ERR("xdr_string()");

    Read the article

  • How to strip out 0x0a special char from utf8 file using c# and keep file as utf8?

    - by user1013388
    The following is a line from a UTF-8 file from which I am trying to remove the special char (0X0A), which shows up as a black diamond with a question mark below: 2464577 ????? True s6620178 Unspecified <1?1009-672 This is generated when SSIS reads a SQL table then writes out, using a flat file mgr set to code page 65001. When I open the file up in Notepad++, displays as 0X0A. I'm looking for some C# code to definitely strip that char out and replace it with either nothing or a blank space. Here's what I have tried: string fileLocation = "c:\\MyFile.txt"; var content = string.Empty; using (StreamReader reader = new System.IO.StreamReader(fileLocation)) { content = reader.ReadToEnd(); reader.Close(); } content = content.Replace('\u00A0', ' '); //also tried: content.Replace((char)0X0A, ' '); //also tried: content.Replace((char)0X0A, ''); //also tried: content.Replace((char)0X0A, (char)'\0'); Encoding encoding = Encoding.UTF8; using (FileStream stream = new FileStream(fileLocation, FileMode.Create)) { using (BinaryWriter writer = new BinaryWriter(stream, encoding)) { writer.Write(encoding.GetPreamble()); //This is for writing the BOM writer.Write(content); } } I also tried this code to get the actual string value: byte[] bytes = { 0x0A }; string text = Encoding.UTF8.GetString(bytes); And it comes back as "\n". So in the code above I also tried replacing "\n" with " ", both in double quotes and single quotes, but still no change. At this point I'm out of ideas. Anyone got any advice? Thanks.
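
    Two details that matter here: 0x0A is the line-feed character '\n' (U+000A), not U+00A0, and String.Replace returns a new string rather than modifying the one it is called on, so the calls without an assignment do nothing. A minimal sketch that keeps the file UTF-8 with a BOM:

        using System.IO;
        using System.Text;

        static class LineFeedStripper
        {
            public static void StripLineFeeds(string fileLocation)
            {
                string content = File.ReadAllText(fileLocation, Encoding.UTF8);
                content = content.Replace("\n", " ");   // or Replace("\n", "")
                // new UTF8Encoding(true) writes the UTF-8 byte-order mark.
                File.WriteAllText(fileLocation, content, new UTF8Encoding(true));
            }
        }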

    Read the article

  • How can I record and save Flash videos with an H264 codec

    - by JimHendriks
    Right now I am working on an AIR 3.2 application which lets you stream a video to a Flash Media Server and saves it on a hard drive. This sequence works fine with the standard Sorenson codec but I want to use H.264 for my videos. I found lots of example code and implemented it in my code, but when I record a video of myself I am unable to re-watch it afterwards. I found how to implement H.264 encoding in a realeyes blog post here. My code is here. It saves the video as a .f4v file, but my browser (I've tried the latest versions of both Chrome and Firefox, with the latest Flash) and also VLC are unable to load the video. I also used a program called Movie Player which is able to open the file but can only show the first frame and the audio. Neither am I able to upload the video to YouTube because they do not support the file extension. Here is an example video file it saved: H264Test1.f4v. My question is: How do I stream and save the movie with a file extension that I am able to re-watch while using the H.264 codec?

    Read the article

  • problems piping in node.js

    - by alvaizq
    We have the following example in node.js var http = require('http'); http.createServer(function(request, response) { var proxy = http.createClient(8083, '127.0.0.1') var proxy_request = proxy.request(request.method, request.url, request.headers); proxy_request.on('response', function (proxy_response) { proxy_response.pipe(response); response.writeHead(proxy_response.statusCode, proxy_response.headers); }); setTimeout(function(){ request.pipe(proxy_request); },3000); }).listen(8081, '127.0.0.1'); The example listen to a request in 127.0.0.1:8081 and sends it to a dummy server (always return 200 OK status code) in 127.0.0.1:8083. The problem is in the pipe among the input stream (readable) and output stream (writable) when we have a async module before (in this case the setTimeOut timing). The pipe doesn't work and nothing is sent to dummy server in 8083 port. Maybe, when we have a async call (in this case the setTimeOut) before the pipe call, the inputstream change to a state "not readable", and after the async call the pipe doesn't send anything. This is just an example...we test it with more async modules from node.js community with the same result (ldapjs, etc)... We try to fix it with: - request.readable =true; //before pipe call - request.pipe(proxy_request, {end : false}); with the same result (the pipe doesn't work). Can anybody help us? Many thanks in advanced and best regards,

    Read the article

  • Using Threads to Handle Sockets

    - by user340468
    I am working on a java program that is essentially a chat room. This is an assignment for class so no code please, I am just having some issues determining the most feasible way to handle what I need to do. I have a server program already setup for a single client using threads to get the data input stream and a thread to handle sending on the data output stream. What I need to do now is create a new thread for each incoming request. My thought is to create a linked list to contain either the client sockets, or possibly the thread. Where I am stumbling is figuring out how to handle sending the messages out to all the clients. If I have a thread for each incoming message how can I then turn around and send that out to each client socket. I'm thinking that if I had a linkedlist of the clientsockets I could then traverse the list and send it out to each one, but then I would have to create a dataoutputstream each time. Could I create a linkedlist of dataoutputstreams? Sorry if it sounds like I'm rambling but I don't want to just start coding this, it could get messy without a good plan. Thanks!

    Read the article
