Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.

  • Is there any way to "peek" at a file while it's uploading through HTTP onto a Windows box?

    - by iisystems
    I need to add a file upload function to an ASP.NET website and would like to be able to read a small portion of the file on the server while it's still uploading. A peek or preview type function, so I can determine the contents and give some feedback to the user while the upload is still in progress (we're talking about large files here). Is there any way to do this? Worst case, I'm thinking of writing a custom control which uploads only a fixed number of bytes of the file once it's chosen, and then under the covers starts another upload of the full file. I'm not totally sure even that is possible, but I'm looking for a more elegant solution anyway... Thanks!

  • What does 'unsigned temp:3' mean?

    - by Munir Ahmed
    Hi, I'm trying to map a C structure to Java using JNA and came across something I've never seen. The struct definition is as follows:

        struct op {
            unsigned op_type:9;       /* ---> what does this mean? */
            unsigned op_opt:1;
            unsigned op_latefree:1;
            unsigned op_latefreed:1;
            unsigned op_attached:1;
            unsigned op_spare:3;
            U8 op_flags;
            U8 op_private;
        };

    You can see some variables being declared like unsigned op_attached:1, and I'm unsure what that means: does it affect the number of bytes allocated for that particular variable? Any help? Thanks, Munir
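
    For anyone else hitting this: op_type:9 declares a bit-field, i.e. a member that occupies exactly 9 bits, with consecutive bit-fields packed into the same storage unit. A minimal sketch (hypothetical names, my addition) showing the effect on size:

        #include <cstdio>

        struct Packed {
            unsigned type     : 9;  // 9 bits
            unsigned opt      : 1;  // 1 bit
            unsigned attached : 1;  // 1 bit
            unsigned spare    : 3;  // 3 bits -- 14 bits so far, sharing one unsigned
        };

        struct Unpacked {
            unsigned type, opt, attached, spare;  // four separate ints
        };

        int main() {
            // Exact sizes are implementation-defined, but on a typical
            // compiler with 32-bit int this prints 4 and 16.
            std::printf("%zu %zu\n", sizeof(Packed), sizeof(Unpacked));
        }

    So the :N suffix does change the layout: the whole run of bit-fields shares one unsigned, which is exactly why a naive JNA mapping of one Java field per member comes out too large.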

  • How to read a txt file from the database (byte[] to FileStream)

    - by Ranjana
    I have stored a txt file in a SQL Server database. I need to read the txt file line by line to get at its content. My code:

        DataTable dtDeleteFolderFile = new DataTable();
        dtDeleteFolderFile = objutility.GetData("GetTxtFileonFileName",
            new object[] { ddlSelectFile.SelectedItem.Text }).Tables[0];
        foreach (DataRow dr in dtDeleteFolderFile.Rows)
        {
            name = dr["FileName"].ToString();
            records = Convert.ToInt32(dr["NoOfRecords"].ToString());
            bytes = (Byte[])dr["Data"];
        }

        FileStream readfile = new FileStream(Server.MapPath("txtfiles/" + name), FileMode.Open);
        StreamReader streamreader = new StreamReader(readfile);
        string line = "";
        line = streamreader.ReadLine();

    Here I have used a FileStream to read from a particular path, but I saved the txt file in byte format in my database. How do I read the txt file content using the byte[] value instead of the path?

  • Bind texture with pinned mapped memory in CUDA

    - by sjchoi
    I was trying to bind host memory that was mapped for zero-copy to a texture, but it looks like it isn't possible. Here is a code sample:

        float* a;
        float* d_a;
        cudaSetDeviceFlags(cudaDeviceMapHost);
        cudaHostAlloc((void **)&a, bytes, cudaHostAllocMapped);
        cudaHostGetDevicePointer((void **)&d_a, (void *)a, 0);
        texture<float, 2, cudaReadModeElementType> tex;
        cudaBindTexture2D(0, &tex, d_a, &channelDesc, width, height, pitch);

    Is it recommended to use pinned memory and just copy it over to device memory that is bound to the texture?

  • About data size filled in the buffer

    - by Bohan Lu
    I need low-latency audio in my project, and I know Android 2.3 supports OpenSL ES. I have read the documents and sample code, and I have decided to use the Android simple buffer queue for playback and recording. I am now writing a simple application to test this. However, I have some questions about recording. If I stop the recorder while it is recording, how do I know the exact number of bytes filled into the last buffer if it is not filled up? In version 1.1 the callback function has parameters describing the buffer and how much of it was filled, but there are no such parameters in version 1.0.1. Is there any way to get this information? Any suggestion would be greatly appreciated!

  • Are indivisible operations still indivisible on multiprocessor and multicore systems?

    - by Steve314
    As per the title; also, what are the limitations and gotchas? For example, on x86 processors, alignment for most data types is optional: an optimisation rather than a requirement. That means a pointer may be stored at an unaligned address, which in turn means that pointer might be split over a cache-line or page boundary. Obviously atomicity could be achieved on any processor if you work hard enough (picking out particular bytes, etc.), but not in a way where you'd still expect the write operation itself to be indivisible. I seriously doubt that a multicore processor can ensure that other cores see a consistent all-before or all-after view of a written pointer in this unaligned-write-crossing-a-boundary situation. Am I right? And are there any similar gotchas I haven't thought of?
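
    A sketch of the portable escape hatch (C++11, my addition rather than part of the question): std::atomic both enforces suitable alignment and makes the indivisibility guarantee explicit, instead of relying on x86's behaviour for aligned plain stores:

        #include <atomic>

        std::atomic<int*> g_slot{nullptr};  // alignment suitable for atomic access

        void publish(int* p) {
            // Indivisible store; release ordering pairs with the acquire load.
            g_slot.store(p, std::memory_order_release);
        }

        int* observe() {
            // Never sees a half-written pointer, on any core.
            return g_slot.load(std::memory_order_acquire);
        }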

  • Error appearing in application after updating cakePHP library files from 1.3.0 to 1.3.1

    - by Gaurav Sharma
    Hi everyone, I have just updated my CakePHP libraries to the latest version, 1.3.1. Before this I was running v1.3.0 with no errors. After running the application I now get this error message:

        unserialize() [function.unserialize]: Error at offset 0 of 2574 bytes [CORE\cake\libs\cache\file.php, line 176]

    I updated the libraries simply by replacing the existing cake files with the new ones downloaded from the net. Is that the correct way of updating? I didn't make any customizations to the CakePHP core. What is the problem? Please help. Thanks

  • Receiving multiple multicast feeds on the same port - C, Linux

    - by Gigi
    I have an application that receives data from multiple multicast sources on the same port. I am able to receive the data. However, I am trying to keep statistics per group (i.e. msgs received, bytes received), and all the data is getting mixed up. Does anyone know how to solve this problem? If I look at the sender's address, it is not the multicast address, but rather the IP of the sending machine. I am using the following socket options:

        struct ip_mreq mreq;
        mreq.imr_multiaddr.s_addr = inet_addr("224.1.2.3");
        mreq.imr_interface.s_addr = INADDR_ANY;
        setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    and also:

        setsockopt(s, SOL_SOCKET, SO_REUSEPORT, &reuse, sizeof(reuse));

    I appreciate any help!!!
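
    One Linux-specific way to recover the group address (a sketch of the IP_PKTINFO approach, my addition; s is the socket from the question): recvmsg() can deliver each packet's destination address as ancillary data, and for multicast traffic that destination is the group:

        #define _GNU_SOURCE
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdio.h>
        #include <sys/socket.h>
        #include <sys/uio.h>

        void recv_with_group(int s) {
            int on = 1;
            setsockopt(s, IPPROTO_IP, IP_PKTINFO, &on, sizeof(on));

            char data[2048];
            char ctrl[CMSG_SPACE(sizeof(struct in_pktinfo))];
            struct iovec iov = { data, sizeof(data) };
            struct msghdr msg = {};
            msg.msg_iov = &iov;
            msg.msg_iovlen = 1;
            msg.msg_control = ctrl;
            msg.msg_controllen = sizeof(ctrl);

            ssize_t n = recvmsg(s, &msg, 0);
            if (n < 0) return;

            for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
                if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_PKTINFO) {
                    struct in_pktinfo *pi = (struct in_pktinfo *)CMSG_DATA(c);
                    // pi->ipi_addr is the destination (group) address --
                    // key the per-group msg/byte counters on it.
                    printf("%zd bytes for group %s\n", n, inet_ntoa(pi->ipi_addr));
                }
            }
        }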

  • remove non-UTF-8 characters from xml with declared encoding=utf-8 - Java

    - by St Nietzke
    I have to handle this scenario in Java: I'm getting a request in XML form from a client with a declared encoding of utf-8. Unfortunately it may contain non-UTF-8 characters, and there is a requirement to remove these characters from the XML on my side (legacy). Let's consider an example where this invalid XML contains £ (pound).

    1) I get the XML as a Java String with £ in it (I don't have access to the interface right now, but I probably get the XML as a Java String). Can I use replaceAll("£", "") to get rid of this character? Any potential issues?

    2) I get the XML as an array of bytes - how do I handle this operation safely in that case?

  • SDCC and malloc() - allocating much less memory than is available

    - by Duncan Bayne
    When I compile this code with SDCC 3.1.0 and run it on an Amstrad CPC 464 (under emulation, with WinCPC 0.9.26 running on Wine):

        void _test_malloc() {
            long idx = 0;
            while (1) {
                if (malloc(5)) {
                    printf("%ld\r\n", ++idx);
                } else {
                    printf("done");
                    break;
                }
            }
        }

    ...it consistently taps out at 92 malloc()s. I make that 460 bytes, which leads me to a couple of questions: What is malloc() doing on this system? I was sort of hoping for an order of magnitude more storage, even on a 64kB system. The behaviour is consistent on 64kB systems and 128kB systems; do I have to perform some sort of magic to access the additional memory, like manual bank switching?

  • How to do a "See Also" to a book using Doxygen

    - by Paul J. Lucas
    The Javadoc @see tag allows a simple string as an argument to refer to something like a book, e.g.:

        @see "The Java Programming Language."

    As far as I can tell, the Doxygen \see command offers no equivalent. Is there any way to have a book reference generated in the documentation, e.g.:

        See Also
            The C++ Programming Language, Bjarne Stroustrup, Addison-Wesley, 2000,
            section 19.4.1: The Standard Allocator

    Clarification: this question is about how to do a "See Also" as part of a comment, e.g.:

        /**
         * Allocates memory in an amazing way.
         * \param size The number of bytes to allocate.
         * \return Returns a pointer to the start of the allocated memory.
         * \see MyOtherClass::alloc()
         * \see "The C++ Programming Language," Bjarne Stroustrup, Addison-Wesley, 2000,
         *      section 19.4.1: The Standard Allocator.
         */
        void* my_alloc( size_t size );

    Of course the above does not work in Doxygen. Note that if there are multiple \see tags, they should be merged into a single "See Also" section (the way \see normally works).
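
    One workaround I know of (hedged; it does not fully satisfy the merge requirement above): Doxygen's \par command accepts a user-defined paragraph title, so a book reference can at least be rendered under its own "See Also:" heading, though it stays a separate paragraph rather than merging into the \see-generated section:

        /**
         * Allocates memory in an amazing way.
         * \param size The number of bytes to allocate.
         * \return Returns a pointer to the start of the allocated memory.
         * \see MyOtherClass::alloc()
         * \par See Also:
         *      "The C++ Programming Language," Bjarne Stroustrup, Addison-Wesley,
         *      2000, section 19.4.1: The Standard Allocator.
         */
        void* my_alloc( size_t size );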

  • What's the best way to convert a .eps (CMYK) to a .jpg (RGB) with ImageMagick

    - by Slinky
    Hi All, I have a bunch of .eps files (CMYK) that I need to convert to .jpg (RGB) files. The following command sometimes gives me under- or over-saturated .jpg images, when compared to the source EPS file:

        $cmd = "convert -density 300 -quality 100% -colorspace RGB ".$epsURL." -flatten -strip ".$convertedURL;

    Is there a smarter way to do this such that the converted image will have the same qualities as the source EPS file? Here is an example of the source file info:

        Image: rejm.eps
          Format: PS (PostScript)
          Class: DirectClass
          Geometry: 537x471
          Base geometry: 1074x941
          Type: ColorSeparation
          Endianess: Undefined
          Colorspace: CMYK
          Channel depth:
            Cyan: 8-bit
            Magenta: 8-bit
            Yellow: 8-bit
            Black: 8-bit
          Channel statistics:
            Cyan:
              Min: 0 (0)
              Max: 255 (1)
              Mean: 161.913 (0.634955)
              Standard deviation: 72.8257 (0.285591)
            Magenta:
              Min: 0 (0)
              Max: 255 (1)
              Mean: 184.261 (0.722591)
              Standard deviation: 75.7933 (0.297229)
            Yellow:
              Min: 0 (0)
              Max: 255 (1)
              Mean: 70.6607 (0.277101)
              Standard deviation: 39.8677 (0.156344)
            Black:
              Min: 0 (0)
              Max: 195 (0.764706)
              Mean: 34.4382 (0.135052)
              Standard deviation: 38.1863 (0.14975)
          Total ink density: 292%
          Colors: 210489
          Rendering intent: Undefined
          Resolution: 28.35x28.35
          Units: PixelsPerCentimeter
          Filesize: 997.727kb
          Interlace: None
          Background color: white
          Border color: #DFDFDFDFDFDF
          Matte color: grey74
          Page geometry: 537x471+0+0
          Dispose: Undefined
          Iterations: 0
          Compression: Undefined
          Orientation: Undefined
          Signature: 8ea00688cb5ae496812125e8a5aea40b0f0e69c9b49b2dc4eb028b22f76f2964
          Profile-iptc: 19738 bytes

    Thanks

  • Why does DataInputStream not support integers?

    - by Jason
    I need to read in a list of numbers from a file, none of which is larger than 32767. Originally I was going to use the Scanner class to pull in the data, but then I read about DataInputStream. This would work well for me, except that according to the API it supports all primitive types EXCEPT ints! Listed are longs, shorts, bytes, chars, booleans, etc., but no ints. I have no need for double precision in the incoming data. Is this a deliberate or an unintentional oversight?

  • Opening Large (24 GB) File In C

    - by zacaj
    I'm trying to read in a 24 GB XML file in C, but it won't work. I'm printing out the current position using ftell() as I read the file in, but once it gets to a big enough number, it goes back to a small number and starts over, never even getting 20% through the file. I assume this is a problem with the range of the variable used to store the position (a long), which can go up to about 4,000,000,000 according to http://msdn.microsoft.com/en-us/library/s3f49ktz%28VS.80%29.aspx, while my file is 25,000,000,000 bytes in size. A long long should work, but how would I change what my compiler (Cygwin/MinGW32) uses, or get it to have fopen64?
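
    A sketch of the usual fix on POSIX-flavoured toolchains (my addition; MinGW's CRT spells the same idea _fseeki64/_ftelli64, which is worth verifying for your exact setup): define _FILE_OFFSET_BITS before any include so that off_t, fseeko() and ftello() are 64-bit, then avoid ftell()/long entirely:

        #define _FILE_OFFSET_BITS 64   // must come before the first #include
        #include <stdio.h>
        #include <sys/types.h>

        int main() {
            FILE *f = fopen("huge.xml", "rb");   // hypothetical 24 GB input
            if (!f) return 1;

            char buf[1 << 16];
            while (fread(buf, 1, sizeof(buf), f) > 0) {
                off_t pos = ftello(f);  // 64-bit offset: no wrap at ~4 GB
                (void)pos;              // ... parse buf, report progress ...
            }
            fclose(f);
            return 0;
        }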

  • About enumerations in Delphi and C++ in 64-bit environments

    - by sum1stolemyname
    I recently had to work around the different default sizes used for enumerations in Delphi and C++, since I have to use a C++ DLL from a Delphi application. One function call returns an array of structs (or records in Delphi), the first element of which is an enum. To make this work, I use packed records (or aligned(1) structs). However, since Delphi selects the size of an enum variable dynamically by default and uses the smallest datatype possible (it was a byte in my case), while C++ uses an int for enums, my data was not interpreted correctly. Delphi offers a compiler switch to work around this, so the declaration of the enum becomes:

        {$Z4}
        TTypeofLight = (
          V3d_AMBIENT,
          V3d_DIRECTIONAL,
          V3d_POSITIONAL,
          V3d_SPOT
        );
        {$Z1}

    My questions are: What will become of my structs when they are compiled on/for a 64-bit environment? Does the default C++ int grow to 8 bytes? Are there other memory alignment / data type size modifications (other than pointers)?
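
    If the C++ side can use C++11, a fixed underlying type pins the enum size from that end too (a sketch, my addition; the Delphi {$Z4} is still needed for the record layout):

        #include <cstdint>

        // The underlying type is now exactly 4 bytes on 32- and 64-bit
        // targets alike, so the DLL's record layout no longer depends on
        // the compiler's default enum size.
        enum TTypeofLight : std::int32_t {
            V3d_AMBIENT,
            V3d_DIRECTIONAL,
            V3d_POSITIONAL,
            V3d_SPOT
        };

        static_assert(sizeof(TTypeofLight) == 4, "ABI expects a 4-byte enum");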

  • How to analyse contents of binary serialization stream?

    - by Tao
    I'm using binary serialization (BinaryFormatter) as a temporary mechanism to store state information in a file for a relatively complex (game) object structure. The files are coming out much larger than I expect, and my data structure includes recursive references, so I'm wondering whether the BinaryFormatter is actually storing multiple copies of the same objects, or whether my basic "number of objects and values I should have" arithmetic is way off base, or where else the excessive size is coming from. Searching on Stack Overflow I was able to find the specification for Microsoft's binary remoting format: http://msdn.microsoft.com/en-us/library/cc236844(PROT.10).aspx What I can't find is any existing viewer that enables you to "peek" into the contents of a BinaryFormatter output file: get object counts and total bytes for different object types in the file, etc. I feel like this must be my "google-fu" failing me (what little I have). Can anyone help? This must have been done before, right?

  • Using objects with STL vector - minimal set of methods

    - by osgx
    Hello. What is the "minimal framework" (necessary methods) of an object which I will use with the STL <vector>? For my assumptions:

        #include <vector>
        #include <cstring>
        using namespace std;

        class Doit {
        private:
            char *a;
        public:
            Doit() { a = (char*)malloc(10); }
            ~Doit() { free(a); }
        };

        int main() {
            vector<Doit> v(10);
        }

    gives

        *** glibc detected *** ./a.out: double free or corruption (fasttop): 0x0804b008 ***
        Aborted

    and in valgrind:

        malloc/free: 2 allocs, 12 frees, 50 bytes allocated.
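
    A sketch of the likely diagnosis (my addition): vector copies its elements, and the compiler-generated copy constructor copies the raw pointer, so several Doit objects end up freeing the same block, which is why valgrind sees far more frees than allocs. The "minimal set" here is the rule of three: destructor, copy constructor, and copy assignment:

        #include <cstdlib>
        #include <cstring>
        #include <vector>

        class Doit {
            char *a;
        public:
            Doit() : a((char*)std::calloc(10, 1)) {}
            Doit(const Doit &other) : a((char*)std::malloc(10)) {
                std::memcpy(a, other.a, 10);   // deep copy, not a shared pointer
            }
            Doit &operator=(const Doit &other) {
                if (this != &other) std::memcpy(a, other.a, 10);
                return *this;
            }
            ~Doit() { std::free(a); }
        };

        int main() {
            std::vector<Doit> v(10);   // each copy now owns its own buffer
        }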

  • How to put/get INT to/from a WCHAR array?

    - by nimo
    How can I put an INT type variable into a WCHAR array? Thanks. EDIT: Sorry for the short question. Yes, we can cast an INT to a WCHAR array using WCHAR*, but when retrieving the result back (WCHAR[] to INT), I just realized that we need to read a size of 2 from the WCHAR array, since an INT is 4 bytes, which is equal to 2 WCHARs:

        WCHAR arData[20];
        INT iVal = 0;
        wmemcpy((WCHAR*)&iVal, arData, (sizeof(INT))/2);

    Is this the safest way to retrieve an INT value back from a WCHAR array?
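
    A byte-oriented sketch (my addition, assuming the Windows WCHAR/INT types): memcpy measured in bytes sidesteps the "an INT is 2 WCHARs" arithmetic and its /2 traps:

        #include <string.h>
        #include <windows.h>

        int main() {
            WCHAR arData[20] = { 0 };
            INT iIn = 123456, iOut = 0;

            memcpy(arData, &iIn, sizeof(iIn));    // store: 4 bytes == 2 WCHARs
            memcpy(&iOut, arData, sizeof(iOut));  // load the same 4 bytes back
            return iOut == iIn ? 0 : 1;           // exits 0: round-trip intact
        }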

  • Copying a 14bit grayscale image (saved in long[]) to a pictureBox

    - by Itsik
    My camera gives me 14-bit grayscale images, but the API's function returns a long* to the image data (so I'm assuming 4 bytes per pixel). My application is written in C++/CLI, and the pictureBox is of the .NET type. I am currently using the BitmapData.LockBits() mechanism to gain pointer access to the image data, and using

        memcpy(bmpData.Scan0.ToPointer(), imageData, sizeof(long)*height*width)

    to copy the image data to the Bitmap. For now, the only PixelFormat that works is 32-bit RGB, and the image appears in shades of blue with contours. Trying to initialize the Bitmap as 16bppGrayscale isn't working. I would ideally want to cast the array from long to word and use a 16-bit format (hoping the 14-bit data will be displayed properly), but I'm not sure whether this works. Also, I don't want to iterate over the image data, so finding the min/max and then histogram-stretching to [0..255] isn't an option for me (the display must be as efficient as possible). Thanks

  • MySQL PHP incompatibility.

    - by Evernoob
    OK, maybe I've overlooked something really simple here, but I can't seem to figure this out. I'm running WAMP locally, but connecting to a remote MySQL database. The local version of PHP is the latest, 5.3.0. One of the remote databases, version 5.0.45, works fine. However, the other remote database I'm trying to connect to, which is version 5.0.22, throws the following errors before dying:

        Warning: mysql_connect() [function.mysql-connect]: OK packet 6 bytes shorter than expected. PID=5880 in ...
        Warning: mysql_connect() [function.mysql-connect]: mysqlnd cannot connect to MySQL 4.1+ using old authentication in ...

    WTF? UPDATE: Reverting to PHP 5.2.*, i.e. anything lower than 5.3.0, resolves the problem completely. As long as I am not running 5.3.0 I can connect to both databases. I'm not sure what the explanation is for this weirdness.

  • Inserting into a bitstream

    - by evilertoaster
    I'm looking for a way to efficiently insert bits into a bitstream and have it "overflow", padding with 0's. For example, if you had a byte array with two bytes, 231 and 109 (11100111 01101101), and did BitInsert(byteArray,4,00), it would insert two bits at bit offset 4, making 11100001 11011011 01000000 (225,219,64). It would be OK even if the method only allowed 1-bit insertions, e.g. BitInsert(byteArray,4,true) or BitInsert(byteArray,4,false). I have one method of doing it, but it has to walk the stream bit by bit with a bitmask, so I'm wondering if there's a simpler approach... Answers in assembly or a C derivative would be appreciated.
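
    A single-bit sketch of one possible BitInsert (my addition; it assumes the caller has already extended the array with zeroed bytes so shifted-out bits have somewhere to land). Calling it twice at offset 4 on {231, 109, 0} produces {225, 219, 64}:

        #include <cstddef>
        #include <cstdint>
        #include <cstdio>

        // Insert `bit` at bitOffset (0 = MSB of buf[0]); everything after
        // it shifts one position toward the end of the buffer.
        void bit_insert(uint8_t *buf, size_t nbytes, size_t bitOffset, int bit) {
            size_t i = bitOffset / 8;
            unsigned k = bitOffset % 8;                    // 0 = MSB
            if (i >= nbytes) return;

            uint8_t hi = (uint8_t)(0xFFu << (8 - k));      // the k kept high bits
            int carry = buf[i] & 1;                        // LSB pushed out
            uint8_t low = (uint8_t)((buf[i] & (uint8_t)~hi) >> 1);
            buf[i] = (uint8_t)((buf[i] & hi) | ((bit & 1) << (7 - k)) | low);

            for (size_t j = i + 1; j < nbytes; ++j) {      // ripple the carry
                int next = buf[j] & 1;
                buf[j] = (uint8_t)((carry << 7) | (buf[j] >> 1));
                carry = next;
            }
        }

        int main() {
            uint8_t a[3] = { 231, 109, 0 };                // pre-grown, zeroed tail
            bit_insert(a, 3, 4, 0);
            bit_insert(a, 3, 4, 0);
            std::printf("%u %u %u\n", a[0], a[1], a[2]);   // 225 219 64
        }

    The tail loop still moves a byte at a time rather than a bit at a time; a word-at-a-time version of the same shift-with-carry idea would cut the passes further.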

  • Does C# have an equivalent to JavaScript's encodeURIComponent()?

    - by travis
    In JavaScript:

        encodeURIComponent("©√") == "%C2%A9%E2%88%9A"

    Is there an equivalent for C# applications? For escaping HTML characters I used:

        txtOut.Text = Regex.Replace(txtIn.Text, @"[\u0080-\uFFFF]",
            m => @"&#" + ((int)m.Value[0]).ToString() + ";");

    But I'm not sure how to convert the match to the correct hexadecimal format that JS uses. For example, this code:

        txtOut.Text = Regex.Replace(txtIn.Text, @"[\u0080-\uFFFF]",
            m => @"%" + String.Format("{0:x}", ((int)m.Value[0])));

    returns "%a9%221a" for "©√" instead of "%C2%A9%E2%88%9A". It looks like I need to split the string up into bytes or something. Edit: This is for a Windows app; the only items available in System.Web are AspNetHostingPermission, AspNetHostingPermissionAttribute, and AspNetHostingPermissionLevel.

  • How do you access byte level information in JavaScript?

    - by JustSmith
    The generally accepted answer is that you can't. However, there is mounting evidence that this is not true, based on the existence of projects that read in types of data that are not basic HTML types. Some projects that do this are the JavaScript version of ProtoBuf and Smokescreen. Smokescreen is a Flash interpreter written in JS, so if it is not possible to get at the bytes directly, how are these projects working around this? The source to Smokescreen can be found here. I have looked it over, but with JS not being my primary language right now, the solution eludes me.

  • Python ctypes argument errors

    - by Patrick Moriarty
    Hello. I wrote a test DLL in C++ to make sure things work before I start using a more important DLL that I need. Basically it takes two doubles and adds them, then returns the result. I've been playing around, and with other test functions I've gotten returns to work; I just can't pass an argument due to errors. My code is:

        import ctypes
        import string

        nDLL = ctypes.WinDLL('test.dll')
        func = nDLL['haloshg_add']
        func.restype = ctypes.c_double
        func.argtypes = (ctypes.c_double, ctypes.c_double)
        print(func(5.0, 5.0))

    It returns the following error for the line that called "func":

        ValueError: Procedure probably called with too many arguments (8 bytes in excess)

    What am I doing wrong? Thanks.
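
    That particular ValueError is the classic calling-convention mismatch: ctypes.WinDLL assumes __stdcall, while a C++ DLL exports __cdecl unless told otherwise, so ctypes sees the stack cleaned up by the wrong side. A hedged sketch of the matching export (my addition, reusing the question's function name):

        // test.dll -- make the convention explicit on the C++ side.
        // __cdecl pairs with ctypes.CDLL; use __stdcall instead if the
        // Python side must stay ctypes.WinDLL.
        extern "C" __declspec(dllexport) double __cdecl
        haloshg_add(double a, double b) {
            return a + b;
        }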

  • Determine cluster size of file system in Python

    - by Philip Fourie
    I would like to calculate the "size on disk" of a file in Python, so I would like to determine the cluster size of the file system where the file is stored. How do I determine the cluster size in Python? (Another built-in method that calculates the "size on disk" would also work.) I looked at os.path.getsize, but it returns the file size in bytes without taking the FS's block size into consideration. I am hoping that this can be done in an OS-independent way...
