Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.

Page 117/151

  • How to setup/calculate texturebuffer in glTexCoordPointer when importing from OBJ-file

    - by JohnMurdoch
    Hi all, I'm parsing an OBJ file on Android and my goal is to render and display the object. Everything works except the texture mapping (importing the resource/image into OpenGL works fine). I don't know how to populate the texture-related data from the OBJ file into a texture coordinate buffer. The OBJ file has vt-lines:

        vt 0.495011 0.389417
        vt 0.500686 0.561346

    and face-lines:

        f 127/73/62 98/72/62 125/75/62

    My draw routine looks like this (only the relevant parts):

        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_NORMAL_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
        gl.glNormalPointer(GL10.GL_FLOAT, 0, normalsBuffer);
        gl.glTexCoordPointer(2, GL10.GL_SHORT, 0, t.getvtBuffer());
        gl.glDrawElements(GL10.GL_TRIANGLES, t.getFacesCount(), GL10.GL_UNSIGNED_SHORT, t.getFaceBuffer());

    Counts from the OBJ file:

        Vertex count: 1023
        Vn count: 1752
        Vt count: 524
        Part 0 | Material name: default | Faces: 2037 | vnPointers: 2037 | vtPointers: 2037

    Any advice is welcome.
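
    Two things stand out. OBJ faces index positions, texcoords, and normals independently per corner, while glDrawElements uses a single index per vertex, so the vt data has to be de-indexed (duplicated per face corner) to line up with the vertex indices; and the texcoords are floats, so the pointer type should be GL10.GL_FLOAT, not GL_SHORT. A minimal de-indexing sketch, assuming hypothetical parsed arrays vtList (the raw vt pairs) and faceVtIndices (the 0-based middle index of each f-triplet), using java.nio.ByteBuffer/ByteOrder/FloatBuffer:

        // De-index OBJ texcoords so each face corner gets its own (u, v) entry.
        float[] texCoords = new float[faceVtIndices.length * 2];
        for (int i = 0; i < faceVtIndices.length; i++) {
            int vt = faceVtIndices[i];
            texCoords[i * 2]     = vtList[vt * 2];            // u
            texCoords[i * 2 + 1] = 1f - vtList[vt * 2 + 1];   // v (flip is an assumption:
        }                                                     // OBJ origin is bottom-left)
        ByteBuffer bb = ByteBuffer.allocateDirect(texCoords.length * 4);
        bb.order(ByteOrder.nativeOrder());
        FloatBuffer vtBuffer = bb.asFloatBuffer();
        vtBuffer.put(texCoords).position(0);
        // Then: gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, vtBuffer);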

    Read the article

  • Maximum Possible File Name Length in Windows Kernel

    - by Lambert
    I was wondering, what is the longest possible name length allowed by the Windows kernel? E.g.: I know the kernel uses UNICODE_STRING structures to hold all object paths, and since the byte length of a wide-character string is stored inside a USHORT, that allows for a maximum path length of 2^15 - 1 characters. Is there a similar, hard restriction on a file name (rather than path)? (I don't care if NTFS or FAT32 imposes a particular restriction; I'm looking for the longest possible theoretically allowed name in the kernel, assuming no additional file system or shell restrictions.) (Edit: For those wondering why this even matters, consider that normally, traversing a directory is achieved by FindFirstFile/FindNextFile calls, one call per file. Given the function named NtQueryDirectoryFile, which is the underlying system call and which returns multiple file names per call, it's actually possible to take advantage of this maximum-length restriction on the path to make an extremely-fast directory traverser that uses solely the stack as a buffer. Now I'm trying to extend that concept, and I need to know the maximum size of a file name.)
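
    For reference, the arithmetic the question relies on: UNICODE_STRING stores its byte length in a USHORT, so the hard ceiling on any kernel name string is 65535 / sizeof(WCHAR) = 32767 characters; per-component name limits on top of that (e.g. 255 characters on NTFS) are filesystem rules, not kernel ones. A trivial sketch of the math:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* UNICODE_STRING.Length is a USHORT counting bytes, so the
               theoretical ceiling for any kernel path/name string is: */
            unsigned maxChars = 65535u / sizeof(WCHAR);   /* 32767 */
            printf("theoretical max characters: %u\n", maxChars);
            /* Individual filesystems impose tighter per-component caps
               (255 characters on NTFS), but that is a per-FS restriction. */
            return 0;
        }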

    Read the article

  • Upgraded activerecord-sqlserver-adapter from 2.2.22 to 2.3.8 and now getting an ODBC error

    - by stuartc
    I have been using MSSQL 2005 with Rails for quite a while now, and decided to bump up the gems on one of my projects. I moved from 2.2.22 to 2.3.8 (latest as of writing) and all of a sudden I got this:

        ODBC::Error: S1090 (0) [unixODBC][Driver Manager]Invalid string or buffer length

    I'm using a DSN connection with FreeTDS, and my database.yml looks like this:

        adapter: sqlserver
        mode: ODBC
        dsn: 'DRIVER=FreeTDS;TDSVER=7.0;SERVER=10.0.0.5;DATABASE=db;Port=1433;UID=user;PWD=pwd;'

    In the meantime I've moved back to 2.2.22; there are no deprecation warnings and everything seems fine. But obviously, for the sake of staying up to date: any ideas what could have changed in the adapter to cause this?
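
    One thing worth trying before digging into the adapter diff: a named DSN defined in odbc.ini instead of the raw DSN-less string, which is the configuration the ODBC mode is usually tested against. A sketch of that setup (DSN name and paths are arbitrary; whether this resolves the S1090 error depends on the unixODBC/FreeTDS versions in play):

        # /etc/odbc.ini
        [myapp_dsn]
        Driver      = FreeTDS
        Server      = 10.0.0.5
        Port        = 1433
        Database    = db
        TDS_Version = 7.0

        # config/database.yml
        production:
          adapter: sqlserver
          mode: ODBC
          dsn: myapp_dsn
          username: user
          password: pwd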

    Read the article

  • Can a client determine whether the server has accept()'d a unix socket?

    - by Havoc P
    I'm dealing with a buggy server that will sometimes fail to accept() connections (but leaves its listening socket open). This is on Linux with unix domain sockets. Currently the only way to detect this is that after sending a bunch of data, the buffer fills up and blocks, and the server isn't sending any replies. This long-after-the-fact failure mode is hard to distinguish from other bugs - the server could be unresponsive for other reasons. Especially for unix domain sockets it seems the kernel should know whether accept() has occurred; is there any way to find this out? Can the client block until accept() happens somehow, or at least check whether it has? This is just for debugging purposes so it can be a little ugly.
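
    There is no portable "has the server accept()ed yet" query: connect() on a unix socket succeeds as soon as the connection fits in the listen backlog, before any accept(). For debugging, one trick is to shrink the send buffer and write non-blocking, so a server that never accepts or reads makes send() fail fast with EAGAIN instead of blocking long after the fact. A sketch under that assumption:

        #include <sys/socket.h>
        #include <errno.h>
        #include <stdio.h>

        /* Debug aid: make backpressure show up early. Call after connect(). */
        static void make_stall_visible(int fd)
        {
            int small = 1;  /* the kernel rounds this up to its minimum */
            setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &small, sizeof small);
        }

        /* Returns 0 if a non-blocking send went through, -1 if the pipe is
           already backed up (likely nobody accept()ed or is reading). */
        static int probe_send(int fd, const void *buf, size_t len)
        {
            ssize_t n = send(fd, buf, len, MSG_DONTWAIT | MSG_NOSIGNAL);
            if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                fprintf(stderr, "send would block: server not draining\n");
                return -1;
            }
            return 0;
        }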

    Read the article

  • Using slime's C-x C-e (Eval the form under the point) with swank-clojure in emacs

    - by hiheelhottie
    Hi, I'm using swank-clojure in emacs on OSX. I'm able to run a slime session. When I use C-x C-e on a simple form in a .clj file like (+ 7 7) I get an sldb buffer with Unable to resolve symbol: + in this context [Thrown class java.lang.Exception] I'm able to evaluate that form in the slime session directly. I was hoping the form in the clj file would get evaluated in the running slime session. Can someone explain how C-x C-e works in swank-clojure and how I can get the form to be evaluated in the running slime session? Thanks, hhh
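
    The usual cause of that exact error is evaluating a form in a buffer whose namespace doesn't exist on the Clojure side yet, so nothing from clojure.core (including +) is resolvable. A hedged fix: make sure the file declares a namespace, compile the buffer once with C-c C-k, and then C-x C-e should evaluate forms in that namespace:

        ;; Top of the .clj file -- the name is illustrative.
        (ns my.scratch)

        (+ 7 7)   ; C-c C-k the buffer once, then C-x C-e here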

    Read the article

  • How do I read input character-by-character in Java?

    - by Jergason
    I am used to the C-style getchar(), but it seems there is nothing comparable in Java. I am building a lexical analyzer, and I need to read the input character by character. I know I can use a Scanner to read a token or line and parse through it char by char, but that seems unwieldy for strings spanning multiple lines. Is there a way to just get the next character from the input buffer in Java, or should I just plug away with the Scanner class? Edit: forgot to say where the input is coming from. The input is a file, not the keyboard.
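
    The closest analogue to getchar() is Reader.read(), which returns one character as an int (-1 at EOF); wrapping the file in a BufferedReader keeps each call from hitting the disk. A minimal sketch (file name assumed):

        import java.io.*;

        public class CharReader {
            public static void main(String[] args) throws IOException {
                try (BufferedReader in = new BufferedReader(new FileReader("input.txt"))) {
                    int c;
                    while ((c = in.read()) != -1) {   // -1 signals EOF, like getchar()
                        char ch = (char) c;
                        // feed ch to the lexer here
                    }
                }
            }
        }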

    Read the article

How can I read the data behind an image URI?

    - by satyamurthy
    Hi, I am implementing image upload; I get the image URI back and now want to read the file's data. This is what I have so far:

        File Img = new File(selectedImage.getPath() + inFileType);
        System.out.println("2............." + Img);
        FileInputStream is = null;
        try {
            is = new FileInputStream(Img);
            is.read(buffer);                    // 'buffer' is never declared
            BufferedInputStream bis = new BufferedInputStream(is);
            Bitmap bm = BitmapFactory.decodeStream(is);   // stream is already partly consumed here
            bis.close();
            is.close();
        }

    I have the URI; how can I read the data?
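
    A sketch of reading the file fully into a byte array before decoding, so the stream isn't consumed twice (decoding from a stream after a read() call starts mid-file and fails):

        import java.io.*;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        // Read the whole file into memory once, then decode from the bytes.
        File img = new File(selectedImage.getPath());
        byte[] data = new byte[(int) img.length()];
        DataInputStream in = new DataInputStream(new FileInputStream(img));
        try {
            in.readFully(data);   // raw file bytes, usable as-is
        } finally {
            in.close();
        }
        Bitmap bm = BitmapFactory.decodeByteArray(data, 0, data.length);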

    Read the article

  • Instanced drawing with OpenGL ES 2.0

    - by Mårten Wikström
    In short: Is it possible to use the gl_InstanceID built-in variable in OpenGL ES 2.0? And, if so, how?

    Some more info: I want to draw multiple instances of an object using glDrawArraysInstanced and gl_InstanceID, and I want my application to run on multiple platforms, including iOS. The specification clearly says that these features require ES 3.0. According to the iOS Device Compatibility Reference, ES 3.0 is only available on a few devices (those based on the A7 GPU; so iPhone 5s, but not iPhone 5 or earlier). So my first assumption was that I needed to avoid instanced drawing on older iOS devices.

    However, further down the compatibility reference document says that the EXT_draw_instanced extension is supported on all SGX Series 5 processors (which includes the iPhone 5 and 4S). This makes me think that I could indeed use instanced drawing on older iOS devices too, by looking up and using the appropriate extension function (EXT or ARB) for glDrawArraysInstanced.

    I'm currently just running some test code using SDL and GLEW on Windows, so I haven't tested anything on iOS yet. However, in my current setup I'm having trouble using the gl_InstanceID built-in variable in a vertex shader. I'm getting the following error message:

        'gl_InstanceID' : variable is not available in current GLSL version

    Enabling the "draw_instanced" extension in GLSL has no effect:

        #extension GL_ARB_draw_instanced : enable
        #extension GL_EXT_draw_instanced : enable

    The error goes away when I specifically declare that I need ES 3.0 (GLSL 300 ES):

        #version 300 es

    Although that seems to work fine on my Windows desktop machine in an ES 2.0 context, I doubt it would work on an iPhone 5. So, shall I abandon the idea of being able to use instanced drawing on older iOS devices?
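
    One detail that may explain the error: under GLSL ES 1.00 (the ES 2.0 shading language) the built-in added by the extension is suffixed, so a shader relying on EXT_draw_instanced should request the extension and use gl_InstanceIDEXT rather than gl_InstanceID (gl_InstanceIDARB for the desktop ARB variant), and draw through the suffixed entry points (glDrawArraysInstancedEXT / glDrawElementsInstancedEXT) looked up at runtime. A sketch, assuming the driver exposes the extension:

        // ES 2.0 vertex shader, assuming GL_EXT_draw_instanced is supported.
        #extension GL_EXT_draw_instanced : require

        attribute vec4 a_position;
        uniform vec2 u_offsets[32];   // one offset per instance

        void main() {
            vec2 off = u_offsets[gl_InstanceIDEXT];  // EXT-suffixed in ESSL 1.00
            gl_Position = a_position + vec4(off, 0.0, 0.0);
        }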

    Read the article

  • getnameinfo specifies socklen_t

    - by bobby
    The second argument of the getnameinfo prototype asks for a socklen_t type, but sizeof yields a size_t. So how can I get a socklen_t? Prototype:

        int getnameinfo(const struct sockaddr *restrict sa, socklen_t salen,
                        char *restrict node, socklen_t nodelen,
                        char *restrict service, socklen_t servicelen,
                        int flags);

    Example:

        struct sockaddr_in SIN;
        memset(&SIN, 0, sizeof(SIN)); // Should this also be socklen_t?
        SIN.sin_family = AF_INET;
        SIN.sin_addr.s_addr = inet_addr(IP);
        SIN.sin_port = 0;
        getnameinfo((struct sockaddr *)&SIN, sizeof(SIN) /* socklen_t */,
                    BUFFER, NI_MAXHOST, NULL, 0, 0);
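
    The value of sizeof(SIN) fits comfortably in a socklen_t, so an explicit cast (or a socklen_t variable) is all that's needed; memset is fine with size_t, since that's what its prototype takes. A sketch:

        #include <sys/socket.h>
        #include <netdb.h>

        struct sockaddr_in SIN;
        socklen_t len = sizeof SIN;   /* implicit size_t -> socklen_t is fine here */

        getnameinfo((struct sockaddr *)&SIN, len,
                    BUFFER, NI_MAXHOST, NULL, 0, 0);
        /* or, at the call site: (socklen_t)sizeof(SIN) */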

    Read the article

  • Django gives "I/O operation on closed file" error when reading from a saved ImageField

    - by Rob Osborne
    I have a model with two image fields, a source image and a thumbnail. When I update the new source image, save it and then try to read the source image to crop/scale it to a thumbnail I get an "I/O operation on closed file" error from PIL. If I update the source image, don't save the source image, and then try to read the source image to crop/scale, I get an "attempting to read from closed file" error from PIL. In both cases the source image is actually saved and available in later request/response loops. If I don't crop/scale in a single request/response loop but instead upload on one page and then crop/scale in another page this all works fine. This seems to be a cached buffer being reused some how, either by PIL or by the Django file storage. Any ideas on how to make an ImageField readable after saving?
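
    The usual workaround is to stop reusing the exhausted upload stream: after saving, reopen the field file (or seek back to the start) before handing it to PIL. A hedged sketch, with field names assumed and io.BytesIO standing in for the era's StringIO:

        from io import BytesIO
        from PIL import Image
        from django.core.files.base import ContentFile

        def make_thumbnail(instance):
            # instance.source / instance.thumb are assumed ImageField names.
            instance.source.open()      # reopen: the upload stream is spent
            instance.source.seek(0)
            img = Image.open(instance.source)
            img.thumbnail((128, 128))
            buf = BytesIO()
            img.save(buf, format="JPEG")
            instance.thumb.save("thumb.jpg", ContentFile(buf.getvalue()), save=True)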

    Read the article

  • C++ code beautifier for emacs/linux

    - by aaa
    Hi, I am looking for a code beautifier for UNIX/Emacs. I have looked at GNU indent and Artistic Style, but I need something a bit different. For example, I would like the following:

        for( int x= 0;; ++ x) if(x) break;

    to be formatted as:

        for (int x = 0; ; ++x) if (x) break;

    As far as I can tell Artistic Style does not do that (correct me if I am wrong). What can you recommend? Thanks.

    Edit: both Artistic Style and indent remove the whitespace. Here is a small interactive command to beautify the region:

        (defun my-emacs-command-beautify-region ()
          (interactive)
          (let ((cmd "astyle"))
            (shell-command-on-region (region-beginning) (region-end)
                                     cmd (current-buffer) t)))
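
    For what it's worth, astyle does have options covering most of the spacing shown above: --pad-oper pads around operators, --pad-header inserts a space after for/if/while, and --unpad-paren strips padding just inside parentheses. Whether the combination reproduces the exact desired output is untested; a sketch wiring them into the helper above:

        (defun my-emacs-command-beautify-region ()
          (interactive)
          ;; Flag set is a guess at the style asked for; see astyle --help.
          (let ((cmd "astyle --pad-oper --pad-header --unpad-paren"))
            (shell-command-on-region (region-beginning) (region-end)
                                     cmd (current-buffer) t)))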

    Read the article

  • jQuery Audio Player

    - by tony noriega
    I was given two MP3 files, one 4.5 MB and one 5.6 MB, and was instructed to have them play on a website I am managing. I have found a nice, clean-looking CSS-based jQuery audio player (happyworm.com/jquery/jplayer/latest/demo-01.htm). My question is: is this the right solution for files that big? I am not sure whether the player preloads the file or streams it (if that is the correct terminology); I don't deal much with audio players and such. Is there another approach I should take to get this to play properly? I don't want it to have to buffer and make the visitor wait, or slow down page loading; I want it to play cleanly and not affect the visitor's session on the site. Thanks.
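
    jPlayer plays through the browser's audio support (with a Flash fallback), so files of this size are fetched progressively rather than loaded up front, and a constructor option controls how eagerly fetching starts. A sketch of the setup in jPlayer 2.x-style API (option names, including preload, vary across jPlayer versions, so treat this as an assumption to verify against the docs):

        $("#jquery_jplayer_1").jPlayer({
          ready: function () {
            $(this).jPlayer("setMedia", { mp3: "audio/track1.mp3" });
          },
          supplied: "mp3",
          preload: "none",   // don't fetch until the visitor hits play
          swfPath: "js"      // Flash fallback location
        });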

    Read the article

  • Writing my own iostream utility class: Is this a good idea?

    - by Alex
    I have an application that wants to read a file word by word, delimited by whitespace. I am using code along these lines:

        std::ifstream in;
        std::string word;
        while (in >> word) {
            // Processing, etc.
            ...
        }

    My issue is that the processing on the words themselves is actually rather light; the major time consumer is a set of mySQL queries I run. What I was thinking of is writing a buffered class that reads something like a kilobyte from the file, initializes a stringstream as a buffer, and performs extraction from that transparently, to avoid a great many I/O operations. Thoughts and advice?
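
    If the file fits in memory, the simplest version of that idea is to slurp the whole file into a string once and extract from an istringstream; one big buffered read replaces thousands of small ones. A sketch (file name assumed):

        #include <fstream>
        #include <sstream>
        #include <string>

        int main() {
            std::ifstream in("words.txt", std::ios::binary);
            std::ostringstream slurp;
            slurp << in.rdbuf();          // one buffered read of the whole file

            std::istringstream words(slurp.str());
            std::string word;
            while (words >> word) {       // extraction now never touches disk
                // processing / mySQL work here
            }
        }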

    Read the article

  • How to reliably measure available memory in Linux?

    - by Alex B
    Linux's /proc/meminfo shows a number of memory usage statistics:

        MemTotal:   4040732 kB
        MemFree:      23160 kB
        Buffers:     163340 kB
        Cached:     3707080 kB
        SwapCached:       0 kB
        Active:     1129324 kB
        Inactive:   2762912 kB

    There is quite a bit of overlap between them. For example, as far as I understand, there can be active page cache (belongs to "cached" and "active") and inactive page cache ("inactive" + "cached"). What I want to do is measure "free" memory, but in a way that includes used pages that are likely to be dropped without a significant impact on the overall system's performance. At first I was inclined to use "free" + "inactive", but Linux's free utility adds buffers and cached to free in its "buffer-adjusted" display, so I am curious what the better approach is. When the kernel runs out of memory, what is the priority of pages to drop, and what is the more appropriate metric for available memory?
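
    For completeness: kernels from 3.14 on export a MemAvailable field in /proc/meminfo that implements exactly this heuristic (including reclaimable slab and low-watermark adjustments) in the kernel itself; on older kernels, free + buffers + cached is the common approximation. A sketch that prefers the kernel's figure and falls back:

        def available_kb():
            """Rough 'droppable without much pain' memory, in kB."""
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, value = line.split(":")
                    info[key] = int(value.split()[0])   # values are in kB
            if "MemAvailable" in info:                  # kernel >= 3.14
                return info["MemAvailable"]
            return info["MemFree"] + info["Buffers"] + info["Cached"]

        print(available_kb())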

    Read the article

  • producing a typewriter-like effect

    - by Tony Ennis
    Android newb here. Please use small words :-) I'd like to simulate typewriter output on my Android device. The output being displayed is generated by a game and is somewhat freeform. The effect I want: individual characters appear at a rate of about 6 characters a second, and when a 'carriage return' is seen, a delay is inserted before typing resumes on the left. What are some suggestions on views? Would the view of choice be a TextView? Even that seems like overkill for read-only, coarsely scrolling output. I saw something on another thread about AsyncTask, which looks useful. Perhaps my game will write to some manner of buffer, and a subclass of AsyncTask will pull characters out every 0.15 seconds or so, add them to the TextView, then invalidate() the TextView? Sound like a plan?
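
    A TextView plus a Handler is usually enough here; no AsyncTask needed, since postDelayed runs on the UI thread and can append one character per tick (append also triggers the redraw, so no explicit invalidate). A sketch, with class and timing names illustrative:

        import android.os.Handler;
        import android.widget.TextView;

        public class Typewriter {
            private static final long CHAR_DELAY_MS = 150;     // ~6 chars/second
            private static final long NEWLINE_DELAY_MS = 600;  // pause at 'carriage return'

            private final TextView view;
            private final Handler handler = new Handler();
            private CharSequence text;
            private int index;

            public Typewriter(TextView view) { this.view = view; }

            public void animate(CharSequence text) {
                this.text = text;
                this.index = 0;
                handler.post(tick);
            }

            private final Runnable tick = new Runnable() {
                @Override public void run() {
                    if (index >= text.length()) return;
                    char c = text.charAt(index++);
                    view.append(String.valueOf(c));   // append redraws the view
                    handler.postDelayed(this,
                        c == '\n' ? NEWLINE_DELAY_MS : CHAR_DELAY_MS);
                }
            };
        }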

    Read the article

  • Python IPC, popen too slow

    - by UnableToLoad
    I need to run a subprocess (./generate_out) from a Python script and get its output. Currently I do this:

        import subprocess

        proc = subprocess.Popen('./generate_out',
                                shell=False,
                                stdout=subprocess.PIPE)
        while proc.poll() is None:
            out = proc.stdout.readline()
            data = doStuff(out)
            print(data)

    But it is slow: sometimes a lot of time passes between the output produced by ./generate_out and the print(data). Knowing that my doStuff() function is very fast, I think some buffer is slowing down my pipe. Notes:

    ./generate_out generates a potentially unlimited number of lines, each of finite length. It seems that when too few characters have been put into the pipe between the two processes, nothing happens; then, once enough has been produced, I get a huge print (not the expected behaviour!). Sometimes I wait many seconds (10-20 and more) between a generate_out print and the Python print. What can I do? Would communicate() be faster? Anything else? Thank you a lot!
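
    The stall described is usually the child's stdio, not Python: when ./generate_out writes to a pipe instead of a terminal, the C library switches its stdout to block buffering (typically 4-8 kB), so lines only arrive in bursts. Running the child under stdbuf forces line buffering without touching its source; a sketch (doStuff is the asker's function; the b'' sentinel is Python 3, use '' on Python 2):

        import subprocess

        # stdbuf -oL forces the child's stdout to line buffering when piped.
        proc = subprocess.Popen(
            ["stdbuf", "-oL", "./generate_out"],
            stdout=subprocess.PIPE,
        )

        # iter(...) avoids the poll()/readline race that drops trailing lines.
        for out in iter(proc.stdout.readline, b""):
            data = doStuff(out)
            print(data)
        proc.wait()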

    Read the article

  • clear html webpage?

    - by noname
    I am currently developing a crawler that crawls links on the web and displays them in the web browser (saving them as well, of course). But after some hours a huge list will be displayed in the browser, and I want to display only, say, 1000 links at a time: then clear the HTML and display the next 1000 links. This is also good for RAM, or the page will eat up all memory. How do I clear the web browser screen? EDIT: I have seen some scripts using flush-buffer functions. Does this have anything to do with my case?
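
    If the links are appended to a list element, "clearing" is just dropping DOM nodes from the front once a cap is hit; a client-side sketch (element id hypothetical):

        var MAX_LINKS = 1000;
        var list = document.getElementById("links");  // a <ul>, one <li> per link

        function addLink(url) {
            var li = document.createElement("li");
            li.textContent = url;
            list.appendChild(li);
            // keep the page (and the browser's memory) bounded
            while (list.children.length > MAX_LINKS) {
                list.removeChild(list.firstChild);
            }
        }

    Regarding the edit: PHP-style flush()/ob_flush() calls only push buffered output to the browser sooner; they cannot remove what the browser has already rendered, so they don't help here.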

    Read the article

  • nxhtml and geben: debug mode stops responding to keystrokes upon step into html/php mixed line

    - by artistoex
    I'm using the PHP debugger geben together with nxhtml-mode. While debugging, as soon as I step into a mixed line such as:

        <foo><?php bar(); ?></foo>

    the debugger no longer accepts any keystrokes. However, the mode line still indicates the debugger's presence (the *debugging* entry). I guess this is due to nxhtml's mode changes, because it's the exact same behavior geben shows after disabling and re-enabling it. Does anybody use nxhtml together with geben and has fixed this? Or is it possible to configure Emacs to enable nxhtml conditionally, so that php-mode is used instead when the buffer was opened by geben?
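
    One blunt but dependable workaround, at the cost of losing nxhtml's mixed-mode highlighting everywhere (a geben-only condition would need a hook that geben may not expose, so this sticks to stock Emacs machinery):

        ;; Force plain php-mode for .php files; add-to-list prepends,
        ;; so this entry wins over nxhtml's auto-mode-alist entry.
        (add-to-list 'auto-mode-alist '("\\.php\\'" . php-mode))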

    Read the article

  • SSIS process files from folder

    - by RT
    Background: I have a folder that gets pumped with files continuously. My SSIS package needs to process the files and delete them. The package is scheduled to run once every minute. I pick up the files in ascending order of file creation time, build an array of files, and then process and delete them one at a time.

    Problem: If an instance of my package takes longer than one minute to run, the next instance of the SSIS package will pick up some of the files the previous instance has in its buffer. By the time the second instance of the package gets around to processing a file, it may already have been deleted by the first instance, creating an exception condition. I was wondering whether there was a way to avoid the exception condition. Thanks.
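
    A common fix is to make each package instance claim its files before processing, e.g. with a Script Task that moves them into an instance-specific subfolder; File.Move is atomic on the same volume, so two instances can never claim the same file. A sketch (paths hypothetical):

        using System;
        using System.IO;

        // Claim files by moving them into a folder owned by this run.
        string inbox = @"\\server\drop";
        string claimed = Path.Combine(inbox, "processing-" + Guid.NewGuid().ToString("N"));
        Directory.CreateDirectory(claimed);

        foreach (var file in Directory.GetFiles(inbox))
        {
            try
            {
                // Atomic on the same volume: either this instance gets the
                // file or the other one already did -- never both.
                File.Move(file, Path.Combine(claimed, Path.GetFileName(file)));
            }
            catch (IOException)
            {
                // Another instance claimed it first; skip it.
            }
        }
        // ...process and delete everything under `claimed`...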

    Read the article

  • [Python] How can I speed up unpickling large objects if I have plenty of RAM?

    - by conradlee
    It's taking me up to an hour to read a 1-gigabyte NetworkX graph data structure using cPickle (it's 1 GB when stored on disk as a binary pickle file). Note that the file quickly loads into memory. In other words, if I run:

        import cPickle as pickle

        f = open("bigNetworkXGraph.pickle", "rb")
        binary_data = f.read()                 # This part doesn't take long
        graph = pickle.loads(binary_data)      # This takes ages

    How can I speed this last operation up? Note that I have tried pickling the data using both binary protocols (1 and 2), and it doesn't seem to make much difference which protocol I use. Also note that although I am using the "loads" (meaning "load string") function above, it is loading binary data, not ASCII data. I have 128 GB of RAM on the system I'm using, so I'm hoping somebody can tell me how to increase some read buffer buried in the pickle implementation.
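
    Two cheap things to try: pass the file object straight to load (which skips building a second 1 GB string), and disable the garbage collector during the load; unpickling creates millions of objects, and the GC's periodic scans over them are often the dominant cost for object-heavy pickles. A sketch:

        import cPickle as pickle
        import gc

        with open("bigNetworkXGraph.pickle", "rb") as f:
            gc.disable()    # avoid GC scans while millions of objects appear
            try:
                graph = pickle.load(f)   # stream from the file; no temp string
            finally:
                gc.enable()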

    Read the article

  • c++ library for endian-aware reading of raw file stream metadata?

    - by Kache4
    I've got raw data streams from image files, like:

        std::vector<char> rawData(fileSize);
        std::ifstream inFile("image.jpg", std::ios::binary);
        inFile.read(&rawData[0], fileSize);

    I want to parse the headers of different image formats for height and width. Is there a portable library that can read ints, longs, shorts, etc. from the buffer/stream, converting for endianness as specified? I'd like to be able to do something like:

        short x = rawData.readLeShort(offset);

    or:

        long y = rawData.readBeLong(offset);

    An even better option would be a lightweight, portable image metadata library (without the extra weight of an image manipulation library) that can work on raw image data. I've found that the Exif libraries out there don't support PNG and GIF.
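
    Failing a library, shift-based readers are a dozen portable lines; shifts make the host's own endianness irrelevant. A sketch matching the wished-for interface as free functions:

        #include <cstddef>
        #include <stdint.h>
        #include <vector>

        // Endian-aware reads from a raw byte buffer.
        inline uint16_t readLeShort(const std::vector<char>& b, size_t off) {
            return (uint8_t)b[off] | ((uint8_t)b[off + 1] << 8);
        }
        inline uint32_t readBeLong(const std::vector<char>& b, size_t off) {
            return ((uint32_t)(uint8_t)b[off]     << 24) |
                   ((uint32_t)(uint8_t)b[off + 1] << 16) |
                   ((uint32_t)(uint8_t)b[off + 2] <<  8) |
                    (uint32_t)(uint8_t)b[off + 3];
        }

        // e.g. PNG stores width/height as big-endian longs at offsets 16/20:
        //     uint32_t width  = readBeLong(rawData, 16);
        //     uint32_t height = readBeLong(rawData, 20);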

    Read the article

  • "Streaming" MJPG using python.

    - by tyler
    I have a webcam that I want to do some image processing on using Python. It's coming through as Motion-JPEG. I want to try to process the stuff "live", but really what I want to do is this:

    1. Open the URL and start streaming data into a buffer
    2. Read x bytes (where x is the image size) into an image
    3. Process that image
    4. Display it in the result panel
    5. Return to step 2

    The problem is that, while I do have the resolution, I have no idea how many bytes to read. I've tried googling the M-JPEG specification but can't find anything on whether the images are separated by some header or what. Anybody have any ideas?
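
    M-JPEG over HTTP is typically multipart/x-mixed-replace, with a boundary line and usually a Content-Length header per part; but even without trusting the headers, the frames can be split on the JPEG markers themselves, since each frame starts with 0xFFD8 (SOI) and ends with 0xFFD9 (EOI). A Python 2-era sketch (URL hypothetical):

        import urllib2

        def mjpeg_frames(url):
            """Yield one JPEG (as a byte string) per frame from an MJPEG stream."""
            stream = urllib2.urlopen(url)
            buf = ""
            while True:
                chunk = stream.read(4096)
                if not chunk:
                    return                        # stream ended
                buf += chunk
                start = buf.find("\xff\xd8")      # SOI marker
                if start == -1:
                    continue
                end = buf.find("\xff\xd9", start + 2)   # EOI marker
                if end == -1:
                    continue
                yield buf[start:end + 2]
                buf = buf[end + 2:]

        # for jpg in mjpeg_frames("http://camera/video.mjpg"): process(jpg)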

    Read the article

  • Emacs: Define a function which loads the file where the function itself is defined

    - by damd
    I'm refactoring a bit in my Emacs setup and have come to the conclusion that I want to use a different init file than the default one. So basically, in my ~/.emacs file, I have this:

        (load "/some/directory/init.el")

    Up until now, that's been working just fine. However, now I want to redefine an old command that I've used for ages, which opens my init file:

        (defun conf ()
          "Open a buffer with the user init file."
          (interactive)
          (find-file user-init-file))

    As you can see, this will open ~/.emacs no matter what I do. I want it to open /some/directory/init.el, or wherever the conf command itself is defined. How would I do that?
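
    The variable load-file-name holds the file currently being loaded, so capturing it at load time inside init.el gives conf the right target no matter where the file lives:

        ;; In /some/directory/init.el: remember where this file actually is.
        (defvar my-init-file (or load-file-name buffer-file-name)
          "The file this init code was loaded from.")

        (defun conf ()
          "Open a buffer with the real init file."
          (interactive)
          (find-file my-init-file))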

    Read the article

.NET SerialPort.Read skips bytes

    - by Lukas Rieger
    Solution: Reading the data bytewise via port.ReadByte is too slow; the problem is inside the SerialPort class. I changed it to reading bigger chunks via port.Read and there are now no buffer overruns. Although I found the solution myself, writing it down helped me, and maybe someone else has the same problem and finds this via Google. (How can I mark it as answered?)

    EDIT 2: By setting port.ReadBufferSize = 2000000; I can delay the problem for ~30 seconds, so it seems .NET really is too slow. Since my application is not that critical, I just set the buffer to 20 MB, but I am still interested in the cause.

    EDIT: I just tested something I had not thought of before (shame on me):

        port.ErrorReceived += (object self, SerialErrorReceivedEventArgs se_arg) =>
        {
            Console.Write("| Error: {0} | ",
                System.Enum.GetName(se_arg.EventType.GetType(), se_arg.EventType));
        };

    and it seems that I have an overrun. Is the .NET implementation too slow for 500k, or is there an error on my side?

    Original question: I built a very primitive oscilloscope (an AVR which sends ADC data over UART to an FTDI chip). On the PC side I have a WPF program that displays this data. The protocol is: two sync bytes (0xaffe), 14 data bytes, two sync bytes, 14 data bytes, and so on. I use 16-bit values, so the 14 data bytes hold 7 channels (LSB first). I verified the uC firmware with hTerm, and it sends and receives everything correctly. But if I try to read the data with C#, sometimes some bytes are lost.

    The oscilloscope program is a mess, but I created a small sample application which has the same symptoms. I added two extension methods to a) read one byte from the COM port, ignoring -1 (EOF), and b) wait for the sync pattern. The sample program first syncs onto the data stream by waiting for 0xaffe, then compares the received bytes with the expected values. The loop runs a few times until an assert-failed message pops up. I could not find anything about lost bytes via Google; any help would be appreciated. Code:

        using System;
        using System.Diagnostics;
        using System.IO.Ports;

        namespace SerialTest
        {
            public static class SerialPortExtensions
            {
                public static byte ReadByteSerial(this SerialPort port)
                {
                    int i = 0;
                    do
                    {
                        i = port.ReadByte();
                    } while (i < 0 || i > 0xff);
                    return (byte)i;
                }

                public static void WaitForPattern_Ushort(this SerialPort port, ushort pattern)
                {
                    byte hi = 0;
                    byte lo = 0;
                    do
                    {
                        lo = hi;
                        hi = port.ReadByteSerial();
                    } while (!(hi == (pattern >> 8) && lo == (pattern & 0x00ff)));
                }
            }

            class Program
            {
                static void Main(string[] args)
                {
                    // 500000 baud, 8n1
                    SerialPort port = new SerialPort("COM3", 500000, Parity.None, 8, StopBits.One);
                    port.Open();
                    port.DiscardInBuffer();
                    port.DiscardOutBuffer();

                    // Sync
                    port.WaitForPattern_Ushort(0xaffe);

                    byte hi = 0;
                    byte lo = 0;
                    int val;
                    int n = 0;

                    // Start loop; the stream is already synced
                    while (true)
                    {
                        // Read 7 16-bit values (= 14 bytes)
                        for (int i = 0; i < 7; i++)
                        {
                            lo = port.ReadByteSerial();
                            hi = port.ReadByteSerial();
                            val = ((hi << 8) | lo);
                            Debug.Assert(val != 0xaffe);
                        }

                        // Read the two sync bytes
                        lo = port.ReadByteSerial();
                        hi = port.ReadByteSerial();
                        val = ((hi << 8) | lo);
                        Debug.Assert(val == 0xaffe);
                        n++;
                    }
                }
            }
        }
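
    For reference, a chunked read loop of the kind the solution describes might look like this (ProcessByte is a hypothetical stand-in for the per-byte parsing step above):

        // Read whatever is available in one call instead of byte-by-byte;
        // SerialPort.Read returns the number of bytes actually copied.
        byte[] chunk = new byte[4096];
        while (true)
        {
            int n = port.Read(chunk, 0, chunk.Length);   // blocks until >= 1 byte
            for (int i = 0; i < n; i++)
                ProcessByte(chunk[i]);
        }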

    Read the article

  • How to play multiple online videos on IOS continuously

    - by Matt.Z
    The scenario is like this: I have a long video and slice it into small files (MP4, for example 5 minutes per file), then put them on a website. I want to play these MP4 videos on iOS continuously, one by one, without the user noticing a pause between the pieces. So I need to buffer the next video while playing the current one, but I don't know where to start. What should I do? Can anyone point me to related documentation or source code I can study?
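
    AVFoundation's AVQueuePlayer does exactly this: hand it the slice URLs as AVPlayerItems and it preloads the next item while the current one plays, making the transitions near-seamless. A sketch in Swift (URLs and slice count hypothetical):

        import AVFoundation

        // One AVPlayerItem per 5-minute slice; AVQueuePlayer buffers the
        // next item while the current one plays.
        let urls = (1...12).map {
            URL(string: "https://example.com/video/part\($0).mp4")!
        }
        let items = urls.map { AVPlayerItem(url: $0) }
        let player = AVQueuePlayer(items: items)
        player.play()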

    Read the article
