Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.

Page 117/151 | < Previous Page | 113 114 115 116 117 118 119 120 121 122 123 124  | Next Page >

  • Upgraded activerecord-sqlserver-adapter from 2.2.22 to 2.3.8 and now getting an ODBC error

    - by stuartc
    I have been using MSSQL 2005 with Rails for quite a while now, and decided to bump the gems on one of my projects, which ran into a problem. I moved from 2.2.22 to 2.3.8 (the latest as of writing) and all of a sudden I got this: ODBC::Error: S1090 (0) [unixODBC][Driver Manager]Invalid string or buffer length. I'm using a DSN connection with FreeTDS, and my database.yml looks like this: adapter: sqlserver mode: ODBC dsn: 'DRIVER=FreeTDS;TDSVER=7.0;SERVER=10.0.0.5;DATABASE=db;Port=1433;UID=user;PWD=pwd;' In the meantime I have moved back to 2.2.22, where there are no deprecation warnings and everything seems fine, but for the sake of staying up to date: any ideas what could have changed in the adapter that could cause this?

    Read the article

  • Django gives "I/O operation on closed file" error when reading from a saved ImageField

    - by Rob Osborne
    I have a model with two image fields, a source image and a thumbnail. When I update the source image, save it, and then try to read it back to crop/scale it into a thumbnail, I get an "I/O operation on closed file" error from PIL. If I update the source image, don't save it, and then try to read it to crop/scale, I get an "attempting to read from closed file" error from PIL. In both cases the source image is actually saved and available in later request/response loops. If I don't crop/scale in a single request/response loop but instead upload on one page and then crop/scale on another page, this all works fine. This seems to be a cached buffer being reused somehow, either by PIL or by the Django file storage. Any ideas on how to make an ImageField readable after saving?
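
    For illustration, a rough sketch of the kind of thing that tends to avoid the stale handle (the field names source and thumbnail and the helper name are hypothetical, not from the question): re-open the stored file through the field's storage instead of reusing the original upload's file object, and write the thumbnail back through Django's file API.

        # Sketch only: re-read the saved image via the field's storage,
        # not via the upload's (now closed) temporary file object.
        from PIL import Image                  # older installs: import Image
        from io import BytesIO
        from django.core.files.base import ContentFile

        def build_thumbnail(instance, size=(120, 120)):
            instance.source.open('rb')         # hypothetical ImageField "source"
            img = Image.open(instance.source)
            img.thumbnail(size)                # scale in place
            buf = BytesIO()
            img.save(buf, format='JPEG')
            instance.thumbnail.save('thumb.jpg', ContentFile(buf.getvalue()),
                                    save=True)  # hypothetical ImageField "thumbnail"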

    Read the article

  • How to reliably measure available memory in Linux?

    - by Alex B
    Linux's /proc/meminfo shows a number of memory usage statistics: MemTotal: 4040732 kB, MemFree: 23160 kB, Buffers: 163340 kB, Cached: 3707080 kB, SwapCached: 0 kB, Active: 1129324 kB, Inactive: 2762912 kB. There is quite a bit of overlap between them. For example, as far as I understand, there can be active page cache (belongs to "cached" and "active") and inactive page cache ("inactive" + "cached"). What I want to do is measure "free" memory, but in a way that includes used pages that are likely to be dropped without a significant impact on the overall system's performance. At first I was inclined to use "free" + "inactive", but Linux's free utility uses "free" + "cached" in its buffer-adjusted display, so I am curious what a better approach is. When the kernel runs out of memory, what is the priority of pages to drop, and what is the more appropriate metric for measuring available memory?
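
    For illustration, a rough sketch of the kind of estimate in question (an approximation, not an authoritative answer): kernels from 3.14 on expose a MemAvailable line in /proc/meminfo that makes this estimate for you; on older kernels MemFree + Buffers + Cached is a common stand-in, even though not all of Cached is reclaimable (tmpfs pages, for instance).

        # Sketch: parse /proc/meminfo and estimate free-plus-droppable memory.
        def meminfo():
            info = {}
            with open('/proc/meminfo') as f:
                for line in f:
                    key, value = line.split(':', 1)
                    info[key.strip()] = int(value.split()[0])  # values are in kB
            return info

        m = meminfo()
        available_kb = m.get('MemAvailable',                   # kernel 3.14+
                             m['MemFree'] + m['Buffers'] + m['Cached'])
        print(available_kb)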

    Read the article

  • ArrayList<String> NullPointerException

    - by Carlucho
    I am trying to solve a labyrinth by DFS, using an adjacency list to represent the vertices and edges of the graph. In total there are 12 nodes (3 rows [A,B,C] * 4 cols [0,..,3]). My program starts by saving all the vertex labels (A0,..,C3), so far so good, then checks the adjacent nodes, also no problem; if movement is possible, it proceeds to create the edge, and here is where it all goes wrong: adjList[i].add(vList[j].label); I used the debugger and found that vList[j].label is not null; it contains a correct string (e.g. "B1"). The only variables that show null are in adjList[i], which leads me to believe I have implemented it wrongly. This is how I did it: public class GraphList { private ArrayList<String>[] adjList; ... public GraphList(int vertexcount) { adjList = (ArrayList<String>[]) new ArrayList[vertexCount]; ... } ... public void addEdge(int i, int j) { adjList[i].add(vList[j].label); // NULLPOINTEREXCEPTION HERE } ... } I would really appreciate it if anyone could point me in the right direction regarding what is going wrong... Thanks!

    Read the article

  • How to set up/calculate the texture buffer for glTexCoordPointer when importing from an OBJ file

    - by JohnMurdoch
    Hi all, I'm parsing an OBJ file in Android and my goal is to render and display the object. Everything works fine except the correct texture mapping (importing the resource/image into OpenGL etc. works fine). I don't know how to populate the texture-related data from the OBJ file into a texture buffer object. In the OBJ file I have vt lines: vt 0.495011 0.389417 vt 0.500686 0.561346 and face lines: f 127/73/62 98/72/62 125/75/62 My draw routine looks like this (only relevant parts): gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_NORMAL_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); gl.glNormalPointer(GL10.GL_FLOAT, 0, normalsBuffer); gl.glTexCoordPointer(2, GL10.GL_SHORT, 0, t.getvtBuffer()); gl.glDrawElements(GL10.GL_TRIANGLES, t.getFacesCount(), GL10.GL_UNSIGNED_SHORT, t.getFaceBuffer()); Output of the counts from the OBJ file: Vertex-count: 1023 Vns-count: 1752 Vts-count: 524 ///////////////////////// Part 0 Material name: default Number of faces: 2037 Number of vnPointers: 2037 Number of vtPointers: 2037 Any advice is welcome.

    Read the article

  • Maximum Possible File Name Length in Windows Kernel

    - by Lambert
    I was wondering, what is the longest possible name length allowed by the Windows kernel? E.g.: I know the kernel uses UNICODE_STRING structures to hold all object paths, and since the byte length of a wide-character string is stored inside a USHORT, that allows for a maximum path length of 2^15 - 1 characters. Is there a similar, hard restriction on a file name (rather than path)? (I don't care if NTFS or FAT32 imposes a particular restriction; I'm looking for the longest possible theoretically allowed name in the kernel, assuming no additional file system or shell restrictions.) (Edit: For those wondering why this even matters, consider that normally, traversing a directory is achieved by FindFirstFile/FindNextFile calls, one call per file. Given the function named NtQueryDirectoryFile, which is the underlying system call and which returns multiple file names per call, it's actually possible to take advantage of this maximum-length restriction on the path to make an extremely-fast directory traverser that uses solely the stack as a buffer. Now I'm trying to extend that concept, and I need to know the maximum size of a file name.)

    Read the article

  • Using SLIME's C-x C-e (eval the form under the point) with swank-clojure in Emacs

    - by hiheelhottie
    Hi, I'm using swank-clojure in emacs on OSX. I'm able to run a slime session. When I use C-x C-e on a simple form in a .clj file like (+ 7 7) I get an sldb buffer with Unable to resolve symbol: + in this context [Thrown class java.lang.Exception] I'm able to evaluate that form in the slime session directly. I was hoping the form in the clj file would get evaluated in the running slime session. Can someone explain how C-x C-e works in swank-clojure and how I can get the form to be evaluated in the running slime session? Thanks, hhh

    Read the article

  • Can a client determine whether the server has accept()'d a unix socket?

    - by Havoc P
    I'm dealing with a buggy server that will sometimes fail to accept() connections (but leaves its listening socket open). This is on Linux with unix domain sockets. Currently the only way to detect this is that after sending a bunch of data, the buffer fills up and blocks, and the server isn't sending any replies. This long-after-the-fact failure mode is hard to distinguish from other bugs - the server could be unresponsive for other reasons. Especially for unix domain sockets it seems the kernel should know whether accept() has occurred; is there any way to find this out? Can the client block until accept() happens somehow, or at least check whether it has? This is just for debugging purposes so it can be a little ugly.
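
    The behaviour described is easy to reproduce; a small sketch (Python used here purely for brevity, with made-up paths and sizes) showing that connect() succeeds while the connection merely sits in the listen backlog, and that the client only notices by eventually blocking once the socket buffers fill:

        # Listener that never calls accept(): clients can still connect and send.
        import os, socket

        path = '/tmp/demo.sock'
        if os.path.exists(path):
            os.unlink(path)
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(path)
        srv.listen(1)                      # note: no accept() anywhere

        cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        cli.connect(path)                  # succeeds; only needs the backlog
        cli.settimeout(1.0)
        sent = 0
        try:
            while True:
                sent += cli.send(b'x' * 4096)
        except socket.timeout:
            print('send blocked after %d bytes; whether accept() ran is still unknown' % sent)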

    Read the article

  • Writing my own iostream utility class: Is this a good idea?

    - by Alex
    I have an application that wants to read word by word, delimited by whitespace, from a file. I am using code along these lines: std::istream in; string word; while (in.good()) { in>>word; // Processing, etc. ... } My issue is that the processing on the words themselves is actually rather light. The major time consumer is a set of mySQL queries I run. What I was thinking is writing a buffered class that reads something like a kilobyte from the file, initializes a stringstream as a buffer, and performs extraction from that transparently to avoid a great many IO operations. Thoughts and advice?

    Read the article

  • How can I read data from an image URI?

    - by satyamurthy
    Hi, I am implementing image upload and I get back an image URI; how can I read the data behind that URI? File Img = new File(selectedImage.getPath()+inFileType); System.out.println("2............."+Img); FileInputStream is = null; try { is = new FileInputStream(Img); is.read(buffer); BufferedInputStream bis = new BufferedInputStream(is); Bitmap bm = BitmapFactory.decodeStream(is); bis.close(); is.close(); This is the code I am using; given the URI, how can I read the data?

    Read the article

  • nxhtml and geben: debug mode stops responding to keystrokes upon stepping into an HTML/PHP mixed line

    - by artistoex
    I'm using the PHP debugger geben together with nxhtml-mode. While debugging, as soon as I step into a mixed line such as <foo><?php bar(); ?></foo>, the debugger no longer accepts any keystrokes. However, the mode line still indicates the debugger's presence (the *debugging* entry). I guess this is due to nxhtml's mode changes, because it's the exact same behavior geben shows after disabling and re-enabling it. Does anybody use nxhtml together with geben and has fixed this? Or is it possible to configure Emacs to enable nxhtml conditionally, so that php-mode is used instead when the buffer was opened by geben?

    Read the article

  • Python IPC, popen too slow

    - by UnableToLoad
    I need to run a subprocess (./myProgram) from a Python script and get its output. Currently I do this: import subprocess proc = subprocess.Popen('./generate_out', shell=False, stdout=subprocess.PIPE, ) while proc.poll() is None: out = proc.stdout.readline() data = doStuff(out) print(data) but it is slow: sometimes a lot of time passes between the output produced by ./generate_out and the print(data). Knowing that my doStuff() function is very fast, I think there is some buffer slowing down my pipe... Notes: ./generate_out generates a potentially unlimited number of lines, each of finite length. It seems that when too few characters have been put into the pipe between the two processes nothing happens, then when enough is produced I get a huge print (not the expected behaviour!). Sometimes I wait many seconds (10-20 and more) between the generate_out print and the Python print. What can I do? Maybe communicate() is faster? Anything else? Thank you a lot!
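
    A hedged sketch of one thing worth testing (stdbuf is from GNU coreutils, and this assumes ./generate_out uses ordinary stdio buffering): the burst-then-silence pattern usually means the child block-buffers its stdout once it is a pipe instead of a terminal, so forcing line buffering on the child side, rather than changing the Python loop, is the experiment to run. Note that communicate() would not help for streaming output, since it waits for the process to exit and returns everything at once.

        import subprocess

        # Force the child's stdout to be line-buffered, then read line by line.
        proc = subprocess.Popen(['stdbuf', '-oL', './generate_out'],
                                stdout=subprocess.PIPE)
        for out in iter(proc.stdout.readline, b''):
            data = doStuff(out)            # doStuff() as defined in the question
            print(data)
        proc.wait()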

    Read the article

  • Instanced drawing with OpenGL ES 2.0

    - by Mårten Wikström
    In short: Is it possible to use the gl_InstanceID built-in variable in OpenGL ES 2.0? And, if so, how? Some more info: I want to draw multiple instances of an object using glDrawArraysInstanced and gl_InstanceID, and I want my application to run on multiple platforms, including iOS. The specification clearly says that these features require ES 3.0. According to the iOS Device Compatibility Reference ES 3.0 is only available on a few devices (those based on the A7 GPU; so iPhone 5s, but not on iPhone 5 or earlier). So my first assumption was that I needed to avoid using instanced drawing on older iOS devices. However, further down in the compatibility reference document it says that the EXT_draw_instanced extension is supported for all SGX Series 5 processors (that includes iPhone 5 and 4s). This makes me think that I could indeed use instanced drawing on older iOS devices too, by looking up and using the appropriate extension function (EXT or ARB) for glDrawArraysInstanced. I'm currently just running some test code using SDL and GLEW on Windows so I haven't tested anything on iOS yet. However, in my current setup I'm having trouble using the gl_InstanceID built-in variable in a vertex shader. I'm getting the following error message: 'gl_InstanceID' : variable is not available in current GLSL version Enabling the "draw_instanced" extension in GLSL has no effect: #extension GL_ARB_draw_instanced : enable #extension GL_EXT_draw_instanced : enable The error goes away when I specifically declare that I need ES 3.0 (GLSL 300 ES): #version 300 es Although that seem to work fine on my Windows desktop machine in an ES 2.0 context I doubt that this would work on an iPhone 5. So, shall I abandon the idea of being able to use instanced drawing on older iOS devices?

    Read the article

  • How do I read input character-by-character in Java?

    - by Jergason
    I am used to the C-style getchar(), but it seems like there is nothing comparable in Java. I am building a lexical analyzer, and I need to read the input character by character. I know I can use a Scanner to scan in a token or line and parse through the token char by char, but that seems unwieldy for strings spanning multiple lines. Is there a way to just get the next character from the input buffer in Java, or should I just plug away with the Scanner class? Edit: I forgot to say where the input is coming from. The input is a file, not the keyboard.

    Read the article

  • [Python] How can I speed up unpickling large objects if I have plenty of RAM?

    - by conradlee
    It's taking me up to an hour to read a 1-gigabyte NetworkX graph data structure using cPickle (it's 1 GB when stored on disk as a binary pickle file). Note that the file loads into memory quickly. In other words, if I run: import cPickle as pickle f = open("bigNetworkXGraph.pickle","rb") binary_data = f.read() # This part doesn't take long graph = pickle.loads(binary_data) # This takes ages How can I speed this last operation up? Note that I have tried pickling the data using both binary protocols (1 and 2), and it doesn't seem to make much difference which protocol I use. Also note that although I am using the "loads" (meaning "load string") function above, it is loading binary data, not ASCII data. I have 128 GB of RAM on the system I'm using, so I'm hoping somebody can tell me how to increase some read buffer buried in the pickle implementation.
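
    For what it's worth, a sketch of two tweaks that sometimes help with very large pickles (no guarantee they apply here): load straight from the file object instead of a 1 GB string, and disable the cyclic garbage collector for the duration of the load, since it otherwise repeatedly scans the millions of freshly created container objects.

        import cPickle as pickle
        import gc

        gc.disable()                       # skip GC passes over millions of new objects
        try:
            with open('bigNetworkXGraph.pickle', 'rb') as f:
                graph = pickle.load(f)     # read directly from the file object
        finally:
            gc.enable()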

    Read the article

  • getnameinfo specifies socklen_t

    - by bobby
    The second argument of the getnameinfo prototype asks for a socklen_t, but sizeof yields a size_t. So how can I get a socklen_t? Prototype: int getnameinfo(const struct sockaddr *restrict sa, socklen_t salen, char *restrict node, socklen_t nodelen, char *restrict service, socklen_t servicelen, int flags); Example: struct sockaddr_in SIN; memset(&SIN, 0, sizeof(SIN)); // This should also be socklen_t ? SIN.sin_family = AF_INET; SIN.sin_addr.s_addr = inet_addr(IP); SIN.sin_port = 0; getnameinfo((struct sockaddr *)&SIN, sizeof(SIN) /* socklen_t */, BUFFER, NI_MAXHOST, NULL, 0, 0);

    Read the article

  • What is the fastest way to copy the contents of a DVD to a hard disk using Linux?

    - by Ritesh
    I have gone through some links which talk about the fastest way of copying files in Windows using FILE_FLAG_NO_BUFFERING and FILE_FLAG_OVERLAPPED. They also talk about how read and write requests with buffer sizes of 256 KB and 128 KB are faster than 1 MB. The link for that is: Explanation for tiny reads (overlapped, buffered) outperforming large contiguous reads? I am looking for a similar method in Linux which allows me to copy the contents of my DVD to hard disk quickly. So I wanted to know: are there file operation flags in Linux which would give the best result, or which way of copying in Linux is best? My code is all in C++.

    Read the article

  • Clear an HTML webpage?

    - by noname
    I am currently developing a crawler that crawls all links on the web and displays them in the web browser (and saves them, of course). But after some hours there will be a huge list displayed in the web browser, and I want to display only, let's say, 1000 links at a time. Then I clear the HTML and display another 1000 links. This is also good for RAM; otherwise it will eat up all the memory. How do I clear the web browser screen? EDIT: I have seen some scripts using flush-buffer functions. Does this have anything to do with my case?

    Read the article

  • How to name variables which are structs

    - by evilpie
    Hello, I often work on private projects using the WinAPI, and as you might know, it has thousands of named and typedef'd structs like MEMORY_BASIC_INFORMATION. I will stick to this one in my question: what is preferred, or better, when you want to name a variable of this type? Is there some kind of style guide for this case? For example, suppose I need that variable for the VirtualQueryEx function. Some ideas: MEMORY_BASIC_INFORMATION memoryBasicInformation; MEMORY_BASIC_INFORMATION memory_basic_information; Just use the name of the struct, not capitalized, with or without the underscores. MEMORY_BASIC_INFORMATION basicInformation; MEMORY_BASIC_INFORMATION information; Short form? MEMORY_BASIC_INFORMATION mbi; I often see this style, using the abbreviation of the struct name. MEMORY_BASIC_INFORMATION buffer; VirtualQueryEx names its third parameter lpBuffer (where you pass the pointer to the struct), so using this name might be an idea, too. Cheers

    Read the article

  • How to define trees with more than one type in the ML programming language

    - by user550413
    Well, I am asked to do the following: define a binary tree which can contain 2 different types, ('a,'b) abtree, with these requirements: Any inner vertex (not a leaf) must be of type 'a or 'b, and the leaves carry no value. For every path in the tree, all 'a values must appear before the 'b values. Examples of paths: 'a->'a->'a->'b (legal) 'a->'b->'b (legal) 'a->'a->'a (legal) 'b->'b->'b (legal) 'a->'b->'a (ILLEGAL) I also need to define another tree which is like the one described above, but with a 'c type as well, and the second requirement becomes: for every path, all the 'a values appear before the 'b values and all the 'b values appear before the 'c values. First, I am not sure how to define binary trees that hold more than 1 type. I mean, the simplest binary tree is: datatype 'a tree = leaf | br of 'a * 'a tree * 'a tree; And also, how can I define a tree that enforces these requirements? Any help will be appreciated. Thanks.

    Read the article

  • How to play multiple online videos on iOS continuously

    - by Matt.Z
    The scenario is like this: I have a long video, slice it into small files (mp4, for example 5 minutes per file), and put them on a website. I want to play these mp4 videos continuously on iOS, one by one, without the user noticing a pause between the pieces. So I need to buffer the next video while I play the current one. But I don't know where to start. What should I do? Can anyone point me to related documentation or source code I can study?

    Read the article

  • Open a file with su/sudo inside Emacs

    - by Chris Conway
    Suppose I want to open a file in an existing Emacs session using su or sudo, without dropping down to a shell and doing sudoedit or sudo emacs. One way to do this is (require 'tramp) C-x C-f /sudo::/path/to/file but this requires an expensive round-trip through SSH. Is there a more direct way? [EDIT] @JBB is right. I want to be able to invoke su/sudo to save as well as open. It would be OK (but not ideal) to re-authorize when saving. What I'm looking for are variations of find-file and save-buffer that can be "piped" through su/sudo.

    Read the article

  • How to handle failure to release a resource which is contained in a smart pointer?

    - by cj
    How should an error during resource deallocation be handled, when the object representing the resource is contained in a shared pointer? Smart pointers are a useful tool to manage resources safely. Examples of such resources are memory, disk files, database connections, or network connections. // open a connection to the local HTTP port boost::shared_ptr<Socket> socket = Socket::connect("localhost:80"); In a typical scenario, the class encapsulating the resource should be noncopyable and polymorphic. A good way to support this is to provide a factory method returning a shared pointer, and declare all constructors non-public. The shared pointers can now be copied from and assigned to freely. The object is automatically destroyed when no reference to it remains, and the destructor then releases the resource. /** A TCP/IP connection. */ class Socket { public: static boost::shared_ptr<Socket> connect(const std::string& address); virtual ~Socket(); protected: Socket(const std::string& address); private: // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; But there is a problem with this approach. The destructor must not throw, so a failure to release the resource will remain undetected. A common way out of this problem is to add a public method to release the resource. class Socket { public: virtual void close(); // may throw // ... }; Unfortunately, this approach introduces another problem: Our objects may now contain resources which have already been released. This complicates the implementation of the resource class. Even worse, it makes it possible for clients of the class to use it incorrectly. The following example may seem far-fetched, but it is a common pitfall in multi-threaded code. socket->close(); // ... size_t nread = socket->read(&buffer[0], buffer.size()); // wrong use! Either we ensure that the resource is not released before the object is destroyed, thereby losing any way to deal with a failed resource deallocation. Or we provide a way to release the resource explicitly during the object's lifetime, thereby making it possible to use the resource class incorrectly. There is a way out of this dilemma. But the solution involves using a modified shared pointer class. These modifications are likely to be controversial. Typical shared pointer implementations, such as boost::shared_ptr, require that no exception be thrown when their object's destructor is called. Generally, no destructor should ever throw, so this is a reasonable requirement. These implementations also allow a custom deleter function to be specified, which is called in lieu of the destructor when no reference to the object remains. The no-throw requirement is extended to this custom deleter function. The rationale for this requirement is clear: The shared pointer's destructor must not throw. If the deleter function does not throw, nor will the shared pointer's destructor. However, the same holds for other member functions of the shared pointer which lead to resource deallocation, e.g. reset(): If resource deallocation fails, no exception can be thrown. The solution proposed here is to allow custom deleter functions to throw. This means that the modified shared pointer's destructor must catch exceptions thrown by the deleter function. On the other hand, member functions other than the destructor, e.g. reset(), shall not catch exceptions of the deleter function (and their implementation becomes somewhat more complicated). 
Here is the original example, using a throwing deleter function: /** A TCP/IP connection. */ class Socket { public: static SharedPtr<Socket> connect(const std::string& address); protected: Socket(const std::string& address); virtual ~Socket() { } private: struct Deleter; // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; struct Socket::Deleter { void operator()(Socket* socket) { // Close the connection. If an error occurs, delete the socket // and throw an exception. delete socket; } }; SharedPtr<Socket> Socket::connect(const std::string& address) { return SharedPtr<Socket>(new Socket(address), Deleter()); } We can now use reset() to free the resource explicitly. If there is still a reference to the resource in another thread or another part of the program, calling reset() will only decrement the reference count. If this is the last reference to the resource, the resource is released. If resource deallocation fails, an exception is thrown. SharedPtr<Socket> socket = Socket::connect("localhost:80"); // ... socket.reset();

    Read the article

  • Improve performance writing 10 million records to a text file using a Windows service

    - by user1039583
    I'm fetching more than 10 million records from a database and writing them to a text file. It takes hours to complete this operation. Is there any option to use TPL features here? It would be great if someone could get me started implementing this with the TPL. using (FileStream fStream = new FileStream("d:\\file.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite)) { BufferedStream bStream = new BufferedStream(fStream); TextWriter writer = new StreamWriter(bStream); for (int i = 0; i < 100000000; i++) { writer.WriteLine(i); } bStream.Flush(); writer.Flush(); // empty buffer; fStream.Flush(); }

    Read the article

  • .NET SerialPort.Read skips bytes

    - by Lukas Rieger
    Solution: Reading the data byte-wise via "port.ReadByte" is too slow; the problem is inside the SerialPort class. I changed it to reading bigger chunks via "port.Read" and there are now no buffer overruns. Although I found the solution myself, writing it down helped me, and maybe someone else has the same problem and finds this via Google... (how can I mark it as answered?) EDIT 2: By setting port.ReadBufferSize = 2000000; I can delay the problem for ~30 seconds, so it seems .NET really is too slow... Since my application is not that critical, I just set the buffer to 20 MB, but I am still interested in the cause. EDIT: I just tested something I had not thought of before (shame on me): port.ErrorReceived += (object self, SerialErrorReceivedEventArgs se_arg) => { Console.Write("| Error: {0} | ", System.Enum.GetName(se_arg.EventType.GetType(), se_arg.EventType)); }; and it seems that I have an overrun. Is the .NET implementation too slow for 500k, or is there an error on my side? Original question: I built a very primitive oscilloscope (an AVR which sends ADC data over UART to an FTDI chip). On the PC side I have a WPF program that displays this data. The protocol is: two sync bytes (0xaffe) - 14 data bytes - two sync bytes - 14 data bytes - ... I use 16-bit values, so inside the 14 data bytes are 7 channels (LSB first). I verified the uC firmware with hTerm, and it does send and receive everything correctly. But if I try to read the data with C#, sometimes some bytes are lost. The oscilloscope program is a mess, but I created a small sample application which has the same symptoms. I added two extension methods to a) read one byte from the COM port and ignore -1 (EOF) and b) wait for the sync pattern. The sample program first syncs onto the data stream by waiting for 0xaffe and then compares the received bytes with the expected values. The loop runs a few times until an assert-failed message pops up. I could not find anything about lost bytes via Google; any help would be appreciated. Code: using System; using System.Collections.Generic; using System.Diagnostics; using System.IO.Ports; using System.Linq; using System.Text; using System.Threading.Tasks; namespace SerialTest { public static class SerialPortExtensions { public static byte ReadByteSerial(this SerialPort port) { int i = 0; do { i = port.ReadByte(); } while (i < 0 || i > 0xff); return (byte)i; } public static void WaitForPattern_Ushort(this SerialPort port, ushort pattern) { byte hi = 0; byte lo = 0; do { lo = hi; hi = port.ReadByteSerial(); } while (!(hi == (pattern >> 8) && lo == (pattern & 0x00ff))); } } class Program { static void Main(string[] args) { //500000 8n1 SerialPort port = new SerialPort("COM3", 500000, Parity.None, 8, StopBits.One); port.Open(); port.DiscardInBuffer(); port.DiscardOutBuffer(); //Sync port.WaitForPattern_Ushort(0xaffe); byte hi = 0; byte lo = 0; int val; int n = 0; // Start Loop, the stream is already synced while (true) { //Read 7 16-bit values (=14 Bytes) for (int i = 0; i < 7; i++) { lo = port.ReadByteSerial(); hi = port.ReadByteSerial(); val = ((hi << 8) | lo); Debug.Assert(val != 0xaffe); } //Read two sync bytes lo = port.ReadByteSerial(); hi = port.ReadByteSerial(); val = ((hi << 8) | lo); Debug.Assert(val == 0xaffe); n++; } } } }

    Read the article
