Search Results

Search found 4337 results on 174 pages for 'binary runner'.

Page 4/174

  • Binary socket and policy file in Flex

    - by Daniil
    Hi, I'm trying to evaluate whether Flex can access binary sockets. Seems that there's a class called Socket (flex.net package). The requirement is that Flex will connect to a server serving binary data. It will then subscribe to data and receive the feed, which it will interpret and display as a chart. I've never worked with Flex, my experience lies with Java, so everything is new to me. So I'm trying to quickly set something simple up. The Java server expects the following: DataInputStream in = ..... byte cmd = in.readByte(); int size = in.readByte(); byte[] buf = new byte[size]; in.readFully(buf); ... do some stuff and send binary data in something like out.writeByte(1); out.writeInt(10000); ... etc... Flex needs to connect to localhost:6666, do the handshake and read data. I got something like this: try { var socket:Socket = new Socket(); socket.connect('192.168.110.1', 9999); Alert.show('Connected.'); socket.writeByte(108); // 'l' socket.writeByte(115); // 's' socket.writeByte(4); socket.writeMultiByte('HHHH', 'ISO-8859-1'); socket.flush(); } catch (err:Error) { Alert.show(err.message + ": " + err.toString()); } The first thing is that Flex does a <policy-file-request/>. I've modified the server to respond with: <?xml version="1.0"?> <!DOCTYPE cross-domain-policy SYSTEM "/xml/dtds/cross-domain-policy.dtd"> <cross-domain-policy> <site-control permitted-cross-domain-policies="master-only"/> <allow-access-from domain="192.168.110.1" to-ports="*" /> </cross-domain-policy> After that, an EOFException happens on the server and that's it. So the question is, am I approaching the whole streaming data issue wrong when it comes to Flex? Am I sending the policy file wrong? Unfortunately, I can't seem to find a good solid example of how to do it. It seems to me that Flex can do a binary client-server application, but I personally lack some basic knowledge when doing it. I'm using Flex 3.5 in the IntelliJ IDEA IDE. Any help is appreciated. Thank you!
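
    One detail that matches the symptoms above: the Flash runtime sends <policy-file-request/> followed by a NUL byte, expects the policy XML back terminated by a NUL byte as well, and then closes that connection and reconnects for the real protocol, which is a plausible source of the EOFException on the Java side. A rough sketch of a standalone policy responder in Python (not the Java server from the question; the port, bind address and policy contents are illustrative only):

      # Minimal sketch of a Flash socket policy responder (Python). The policy XML,
      # port numbers and bind address are placeholders, not taken from the question.
      import socket

      POLICY = (
          b'<?xml version="1.0"?>'
          b'<cross-domain-policy>'
          b'<allow-access-from domain="*" to-ports="6666"/>'
          b'</cross-domain-policy>\x00'          # Flash expects a trailing NUL byte
      )

      srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      srv.bind(("0.0.0.0", 843))                 # 843 is the default master policy port
      srv.listen(5)

      while True:
          conn, _ = srv.accept()
          request = conn.recv(1024)              # usually b'<policy-file-request/>\x00'
          if request.startswith(b"<policy-file-request/>"):
              conn.sendall(POLICY)
          conn.close()                           # client disconnects and reconnects for data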

    Read the article

  • REST service with binary data

    - by user179437
    Hi, I want to create a RESTful service which can accept binary data. I've implemented the javax.xml.ws.Provider interface, but I can't get the content of the request. If I use javax.xml.ws.Dispatch, then it sends only XML data, but I need to transfer binary data. Please suggest a solution, but I'd prefer not to use JAX-RS or Restlets. Thanks.

    Read the article

  • Examples of attoparsec in parsing binary file formats?

    - by me2
    Previously attoparsec was suggested to me for parsing complex binary file formats. While I can find examples of attoparsec parsing HTTP, which is essentially text based, I cannot find an example parsing actual binary data, for example a TCP packet, an image file, or an mp3. Can someone post some code, or a pointer to some code, that does this using attoparsec?

    Read the article

  • VBScript - image to binary

    - by countnazgul
    Hi to all, I'm not a VB programmer, but I need a VBScript that converts an image file (from the local disk) to binary data and then passes it to a web service. I know how to pass data to the web service, but I can't find out how to convert the image file to binary data. I've spent a lot of time looking for some kind of solution, but with no luck. Can somebody help me? Thanks!
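
    For comparison with the VBScript goal, the usual pattern is to open the file in binary mode and Base64-encode the bytes so they can travel inside a web-service call. A small Python sketch of that idea (an analogy rather than VBScript code; the file name is a placeholder):

      # Read an image as raw bytes and Base64-encode it for transport to a web service.
      # "photo.jpg" is just an illustrative path.
      import base64

      with open("photo.jpg", "rb") as f:          # "rb" = raw binary read
          raw = f.read()

      encoded = base64.b64encode(raw).decode("ascii")   # text-safe form for the request body
      print(len(raw), "bytes ->", len(encoded), "Base64 characters")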

    Read the article

  • How to get the binary data from a download link

    - by jacob
    Hi all, I need to get the binary data behind a download link. When I open this link in the browser it always starts the download manager. Instead, I would like to fetch that binary data and display it in the browser. How is this possible in any language? Objective-C or C# preferred. Thanks
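
    Whatever the language, the idea is to issue the HTTP request yourself and read the response body as bytes, instead of letting the browser hand it to a download manager. A small sketch in Python (the question asked for Objective-C or C#, so treat this as the shape of the solution; the URL is a placeholder):

      # Fetch the raw bytes behind a download link. The URL is a placeholder.
      from urllib.request import urlopen

      with urlopen("https://example.com/files/sample.bin") as resp:
          data = resp.read()                          # the binary payload
          content_type = resp.headers.get("Content-Type")

      print(content_type, len(data), "bytes")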

    Read the article

  • How can I declare a binary type

    - by hamza
    I need to work with a binary number. I just write: const x = 00010000; and it doesn't work. I know that I can use a hexadecimal number that has the same value as 00010000, but for my own knowledge I want to know if there is a type in C++ for binary numbers, and if there isn't, is there another solution for my problem? Thanks.
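
    For comparison only (this is Python, not C++): some languages accept binary literals directly, and C++14 later added a similar 0b prefix; a value can also be parsed from a string of bits at run time.

      # Illustration in Python, not an answer in C++ terms.
      x = 0b00010000            # binary literal, equal to 16 (0x10)
      y = int("00010000", 2)    # parse a string of bits at run time
      print(x, y, x == 0x10)    # 16 16 True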

    Read the article

  • Question on Binary Search Trees.

    - by AGeek
    Hi, I was thinking of implementing a binary search tree. I have implemented some very basic operations such as search, insert, and delete. Please share your experiences as to what other operations I could perform on binary search trees, and some basic real-world operations that are needed time and again. I hope my question was clear. Thanks.
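
    A few operations that commonly sit alongside search/insert/delete are in-order traversal (which yields the keys in sorted order), minimum/maximum, and height. A minimal Python sketch of those three, with invented node and function names:

      # A few operations that often accompany insert/search/delete on a BST:
      # in-order traversal (sorted keys), minimum, and height.
      class Node:
          def __init__(self, key):
              self.key, self.left, self.right = key, None, None

      def insert(root, key):
          if root is None:
              return Node(key)
          if key < root.key:
              root.left = insert(root.left, key)
          else:
              root.right = insert(root.right, key)
          return root

      def inorder(root):                   # yields keys in ascending order
          if root:
              yield from inorder(root.left)
              yield root.key
              yield from inorder(root.right)

      def minimum(root):                   # leftmost node holds the smallest key
          while root.left:
              root = root.left
          return root.key

      def height(root):                    # nodes on the longest root-to-leaf path
          return 0 if root is None else 1 + max(height(root.left), height(root.right))

      root = None
      for k in (5, 2, 8, 1, 3):
          root = insert(root, k)
      print(list(inorder(root)), minimum(root), height(root))   # [1, 2, 3, 5, 8] 1 3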

    Read the article

  • get a range of objects with binary search

    - by Behrooz
    I have some data like this:

      ID  Value
      1   AAA
      1   ABC
      2   dasd
      2   dsfdsf
      2   dsfsd
      3   df
      3   dwqef

    They are objects (not plain text), and I want to get all objects with ID = 2. I can do a binary search and get index 3, but how can I get the bounds (2 and 4)? Is there any efficient algorithm? The real problem has lists with about one million items. Any language except bf and Lisp can help.
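
    One standard answer is to run two binary searches, one for the lower bound and one for the upper bound of the key; both are O(log n), so a million items is no problem. A sketch in Python using the bisect module, with the data above stored as (ID, value) tuples sorted by ID:

      # Two binary searches give the half-open range [lo, hi) of items with a key;
      # both are O(log n). The list must already be sorted by ID.
      from bisect import bisect_left, bisect_right

      items = [(1, "AAA"), (1, "ABC"), (2, "dasd"), (2, "dsfdsf"), (2, "dsfsd"),
               (3, "df"), (3, "dwqef")]
      ids = [item_id for item_id, _ in items]     # just the sort keys

      lo = bisect_left(ids, 2)                    # first index with ID >= 2  -> 2
      hi = bisect_right(ids, 2)                   # first index with ID >  2  -> 5
      print(items[lo:hi])                         # all three objects with ID == 2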

    Read the article

  • how to init binary buffer in python

    - by ace
    So, I read a binary field from the DB, i.e. 'field1', into the variable Buf1, and then do something like: unpack_from('I', Buf1, 0) and all is OK. But the question is, how can I init Buf1 without going to the DB? I can get the value from the DB manually and init my variable statically, but how? In the DB field 'field1' I see something like '0x7B0500000100000064000000B80100006'. How can I init a valid binary buffer from it?
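
    If the value shown by the DB tool is hex text, bytes.fromhex (or binascii.unhexlify) turns it into a buffer that unpack_from accepts. A small sketch; the hex string below is trimmed to a whole number of bytes purely for illustration:

      # Build the buffer from the hex text shown by the DB tool, then unpack it the
      # same way as data fetched from the database.
      from struct import unpack_from

      hex_text = "7B0500000100000064000000B8010000"   # '0x...' string without the 0x prefix
      Buf1 = bytes.fromhex(hex_text)                  # a real binary buffer

      print(unpack_from('I', Buf1, 0))                # (1403,) on a little-endian machine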

    Read the article

  • VB6: Slow Binary Write?

    - by Tom the Junglist
    Wondering why a particular binary write operation in VB is so slow. The function reads a Byte array from memory and dumps it into a file like this: Open Destination For Binary Access Write As #1 Dim startP, endP As Long startP = BinaryStart endP = UBound(ReadBuf) - 1 Dim i as Integer For i = startP To endP DoEvents Put #1, (i - BinaryStart) + 1, ReadBuf(i) Next Close #1 For two megabytes on a slower system, this can take up to a minute. Can anyone tell me why this is so slow?
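
    For what it's worth, the usual suspects in loops like this are the per-byte Put (one tiny I/O operation per byte) and the DoEvents call on every iteration; writing the whole buffer in one operation is typically orders of magnitude faster. A Python stand-in (not VB6) contrasting the two patterns, with placeholder file names and buffer size:

      # Python stand-in contrasting one tiny write per byte with a single bulk write.
      import os, time

      data = os.urandom(256 * 1024)                   # 256 KB of test bytes

      t0 = time.perf_counter()
      with open("slow.bin", "wb") as f:
          for b in data:                              # like Put inside the loop above
              f.write(bytes([b]))
      t1 = time.perf_counter()

      with open("fast.bin", "wb") as f:
          f.write(data)                               # one call for the whole buffer
      t2 = time.perf_counter()

      print(f"per-byte: {t1 - t0:.3f}s  bulk: {t2 - t1:.3f}s")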

    Read the article

  • Is this the right strategy to convert an in-level order binary tree to a doubly linked list?

    - by Ankit Soni
    So I recently came across this question - Make a function that converts an in-level-order binary tree into a doubly linked list. Apparently, it's a common interview question. This is the strategy I had - simply do a pre-order traversal of the tree, and instead of returning a node, return a list of nodes, in the order in which you traverse them, i.e. return a list, and append the current node to the list at each point. For the base case, return the node itself when you are at a leaf. So you would say left = recursive_function(node.left) right = recursive_function(node.right) return(left.append(node.data)).append(right); Is this the right approach?
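
    A common variant of this strategy uses an in-order walk (so the list comes out sorted for a BST) and splices each visited node onto the tail of the list instead of building intermediate lists. A sketch in Python; the node fields are invented, and the tree's left/right pointers are reused as the list's prev/next links:

      # In-order variant: walk left-root-right and splice each node onto the tail
      # of a growing doubly linked list.
      class Node:
          def __init__(self, data, left=None, right=None):
              self.data, self.left, self.right = data, left, right

      def tree_to_dll(root):
          head = tail = None
          def visit(node):
              nonlocal head, tail
              if node is None:
                  return
              visit(node.left)
              if tail is None:
                  head = tail = node                   # smallest node starts the list
              else:
                  tail.right, node.left = node, tail   # append node after the tail
                  tail = node
              visit(node.right)
          visit(root)
          return head

      root = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
      node, out = tree_to_dll(root), []
      while node:
          out.append(node.data)
          node = node.right
      print(out)                                       # [1, 2, 3, 4, 5, 6, 7]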

    Read the article

  • How to have operations with character/items in binary with concrete operations?

    - by Piperoman
    I have the following problem. An item can have a lot of states: NORMAL = 0000000, DRY = 0000001, HOT = 0000010, BURNING = 0000100, WET = 0001000, COLD = 0010000, FROZEN = 0100000, POISONED = 1000000. An item can have several states at the same time, but not all of them: it is impossible to be dry and wet at the same time. If you COLD a WET item, it turns into FROZEN. If you HOT a WET item, it turns into NORMAL. An item can be BURNING and POISONED. Etc. I have tried to assign binary flags to the states and use AND to combine different states, checking beforehand whether the combination is possible, or changing to another status. Does there exist a concrete approach to solve this problem efficiently, without an interminable switch that checks every state against every new state? It is relatively easy to check 2 different states, but if a third state exists it is not trivial.
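
    A sketch of the flag idea in Python (not C++), with a handful of explicit transition rules instead of a switch over every pair of states. The constants mirror the list above; the specific rules encoded here (wet plus cold gives frozen, wet plus hot goes back to normal, wet cancels dry) are just one reading of the question, not a definitive design:

      # Flag sketch in Python (not C++). The transition rules are one possible reading.
      NORMAL, DRY, HOT, BURNING, WET, COLD, FROZEN, POISONED = (
          0, 1 << 0, 1 << 1, 1 << 2, 1 << 3, 1 << 4, 1 << 5, 1 << 6)

      def apply_effect(state, effect):
          if effect == COLD and state & WET:
              return (state & ~WET) | FROZEN          # wet + cold -> frozen
          if effect == HOT and state & WET:
              return state & ~WET                     # wet + hot -> back to normal
          if effect == WET and state & DRY:
              return (state & ~DRY) | WET             # wet replaces dry
          return state | effect                       # otherwise just set the bit

      s = apply_effect(NORMAL, WET)
      s = apply_effect(s, COLD)
      print(bin(s), bool(s & FROZEN))                 # 0b100000 True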

    Read the article

  • What web oriented language would work best with binary data?

    - by Qqwy
    I want to create a service where people can upload files. However, since file storage costs money, I want to compress the files so they take less space. I would like to write my own compression algorithm; however, PHP doesn't have good ways to handle binary data (which many compression algorithms need). So I wondered, what would be a better language to create such a website in? I have knowledge of PHP (and JavaScript, HTML and CSS) but no experience with other things like Ruby, Perl, Python, and other web development languages.
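
    As one data point rather than a recommendation, languages with a first-class bytes type make this kind of work straightforward. A short Python sketch compressing an uploaded file's bytes with the standard zlib module (the file name is a placeholder, and zlib stands in for the custom algorithm mentioned above):

      # Compress an uploaded file's bytes with the standard zlib module.
      import zlib

      with open("upload.bin", "rb") as f:
          raw = f.read()

      packed = zlib.compress(raw, 9)              # 9 = best compression
      print(f"{len(raw)} -> {len(packed)} bytes")

      assert zlib.decompress(packed) == raw       # round-trips losslessly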

    Read the article

  • improving conversions to binary and back in C#

    - by Saad Imran.
    I'm trying to write a general purpose socket server for a game I'm working on. I know I could very well use already built servers like SmartFox and Photon, but I want to go through the pain of creating one myself for learning purposes. I've come up with a BSON-inspired protocol to convert the basic data types, their arrays, and a special GSObject to binary and arrange them in a way so that it can be put back together into object form on the client end. At the core, the conversion methods utilize the .Net BitConverter class to convert the basic data types to binary. Anyways, the problem is performance: if I loop 50,000 times and convert my GSObject to binary each time, it takes about 5500ms (the resulting byte[] is just 192 bytes per conversion). I think this would be way too slow for an MMO that sends 5-10 position updates per second with 1000 concurrent users. Yes, I know it's unlikely that a game will have 1000 users on at the same time, but like I said earlier this is supposed to be a learning process for me; I want to go out of my way and build something that scales well and can handle at least a few thousand users. So yea, if anyone's aware of other conversion techniques or sees where I'm losing performance I would appreciate the help. GSBitConverter.cs This is the main conversion class; it adds extension methods to the main data types to convert them to the binary format. It uses the BitConverter class to convert the base types. I've shown only the code to convert integers and integer arrays, but the rest of the methods are pretty much replicas of those two; they just overload the type. public static class GSBitConverter { public static byte[] ToGSBinary(this short value) { return BitConverter.GetBytes(value); } public static byte[] ToGSBinary(this IEnumerable<short> value) { List<byte> bytes = new List<byte>(); short length = (short)value.Count(); bytes.AddRange(length.ToGSBinary()); for (int i = 0; i < length; i++) bytes.AddRange(value.ElementAt(i).ToGSBinary()); return bytes.ToArray(); } public static byte[] ToGSBinary(this bool value); public static byte[] ToGSBinary(this IEnumerable<bool> value); public static byte[] ToGSBinary(this IEnumerable<byte> value); public static byte[] ToGSBinary(this int value); public static byte[] ToGSBinary(this IEnumerable<int> value); public static byte[] ToGSBinary(this long value); public static byte[] ToGSBinary(this IEnumerable<long> value); public static byte[] ToGSBinary(this float value); public static byte[] ToGSBinary(this IEnumerable<float> value); public static byte[] ToGSBinary(this double value); public static byte[] ToGSBinary(this IEnumerable<double> value); public static byte[] ToGSBinary(this string value); public static byte[] ToGSBinary(this IEnumerable<string> value); public static string GetHexDump(this IEnumerable<byte> value); } Program.cs Here's the object that I'm converting to binary in a loop. 
class Program { static void Main(string[] args) { GSObject obj = new GSObject(); obj.AttachShort("smallInt", 15); obj.AttachInt("medInt", 120700); obj.AttachLong("bigInt", 10900800700); obj.AttachDouble("doubleVal", Math.PI); obj.AttachStringArray("muppetNames", new string[] { "Kermit", "Fozzy", "Piggy", "Animal", "Gonzo" }); GSObject apple = new GSObject(); apple.AttachString("name", "Apple"); apple.AttachString("color", "red"); apple.AttachBool("inStock", true); apple.AttachFloat("price", (float)1.5); GSObject lemon = new GSObject(); apple.AttachString("name", "Lemon"); apple.AttachString("color", "yellow"); apple.AttachBool("inStock", false); apple.AttachFloat("price", (float)0.8); GSObject apricoat = new GSObject(); apple.AttachString("name", "Apricoat"); apple.AttachString("color", "orange"); apple.AttachBool("inStock", true); apple.AttachFloat("price", (float)1.9); GSObject kiwi = new GSObject(); apple.AttachString("name", "Kiwi"); apple.AttachString("color", "green"); apple.AttachBool("inStock", true); apple.AttachFloat("price", (float)2.3); GSArray fruits = new GSArray(); fruits.AddGSObject(apple); fruits.AddGSObject(lemon); fruits.AddGSObject(apricoat); fruits.AddGSObject(kiwi); obj.AttachGSArray("fruits", fruits); Stopwatch w1 = Stopwatch.StartNew(); for (int i = 0; i < 50000; i++) { byte[] b = obj.ToGSBinary(); } w1.Stop(); Console.WriteLine(BitConverter.IsLittleEndian ? "Little Endian" : "Big Endian"); Console.WriteLine(w1.ElapsedMilliseconds + "ms"); } Here's the code for some of my other classes that are used in the code above. Most of it is repetitive. GSObject GSArray GSWrappedObject

    Read the article

  • Binary files printing and desired precision

    - by yCalleecharan
    Hi, I'm printing a variable say z1 which is a 1-D array containing floating point numbers to a text file so that I can import into Matlab or GNUPlot for plotting. I've heard that binary files (.dat) are smaller than .txt files. The definition that I currently use for printing to a .txt file is: void create_out_file(const char *file_name, const long double *z1, size_t z_size){ FILE *out; size_t i; if((out = _fsopen(file_name, "w+", _SH_DENYWR)) == NULL){ fprintf(stderr, "***> Open error on output file %s", file_name); exit(-1); } for(i = 0; i < z_size; i++) fprintf(out, "%.16Le\n", z1[i]); fclose(out); } I have three questions: Are binary files really more compact than text files?; If yes, I would like to know how to modify the above code so that I can print the values of the array z1 to a binary file. I've read that fprintf has to be replaced with fwrite. My output file say dodo.dat should contain the values of array z1 with one floating number per line. I have %.16Le up in my code but I think that %.15Le is right as I have 15 precision digits with long double. I have put a dot (.) in the width position as I believe that this allows expansion to an arbitrary field to hold the desired number. Am I right? As an example with %.16Le, I can have an output like 1.0047914240730432e-002 which gives me 16 precision digits and the width of the field has the right width to display the number correctly. Is placing a dot (.) in the width position instead of a width value a good practice? Thanks a lot...
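
    On question 1: yes, a binary file stores each long double as its raw bytes (typically 8 to 16 per value depending on the platform) instead of a 20-plus character formatted line, so it is usually much smaller, and a single fwrite of the whole array replaces the fprintf loop. A Python sketch (not C) of the same size comparison; file names are placeholders, and Python floats are 64-bit doubles rather than long doubles:

      # Size comparison: the same numbers written as text lines vs. raw 8-byte doubles.
      import os, struct

      z1 = [i * 1.0e-3 for i in range(10000)]

      with open("dodo.txt", "w") as f:                 # fprintf-style: formatted lines
          for v in z1:
              f.write(f"{v:.15e}\n")

      with open("dodo.dat", "wb") as f:                # fwrite-style: raw bytes
          f.write(struct.pack(f"{len(z1)}d", *z1))

      print(os.path.getsize("dodo.txt"), "text bytes vs",
            os.path.getsize("dodo.dat"), "binary bytes")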

    Read the article

  • how to send binary data within an xml string

    - by daemonkid
    I want to send a binary file to a .NET C# component in the following XML format: <BinaryFileString fileType='pdf'> <!--binary file data string here--> </BinaryFileString> In the component that is called I will use the above XML string and convert the binary string received within the BinaryFileString tag into a file as specified by the fileType='' attribute. The file type could be doc/pdf/xls/rtf. I have the code in the calling application to get the bytes out of the file to be sent. How do I prepare them to be sent with XML tags wrapped around them? I want the application to send out a string to the component and not a byte stream, because there is no way I can decipher the file type [pdf/doc/xls] by just looking at the byte stream. Hence the XML string with the fileType attribute. Any ideas on this? Method for extracting bytes below: FileStream fs = new FileStream(_filePath, FileMode.Open, FileAccess.Read); using (Stream input = fs) { byte[] buffer = new byte[8192]; int bytesRead; while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0) { } } return buffer; Thanks.
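
    Raw bytes can't go straight into an XML text node, so the usual approach is to Base64-encode them and decode on the receiving side; the fileType attribute then carries the format exactly as planned. A sketch of both directions in Python rather than C#; the file name is a placeholder:

      # Base64 makes the bytes safe inside an XML text node; the receiver decodes them.
      import base64
      import xml.etree.ElementTree as ET

      with open("report.pdf", "rb") as f:
          file_bytes = f.read()

      elem = ET.Element("BinaryFileString", fileType="pdf")
      elem.text = base64.b64encode(file_bytes).decode("ascii")
      xml_string = ET.tostring(elem, encoding="unicode")      # what the caller would send

      # Receiving side: parse the XML, read the attribute, decode the payload.
      parsed = ET.fromstring(xml_string)
      restored = base64.b64decode(parsed.text)
      assert parsed.get("fileType") == "pdf" and restored == file_bytes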

    Read the article

  • Boost Binary Endian parser not working?

    - by Hai
    I am studying how to use the Boost Spirit Qi binary endian parser. I wrote a small test parser program according to here and the basic examples, but it doesn't work properly. It gives me the message "Error: no match". Here is my code. #include "boost/spirit/include/qi.hpp" #include "boost/spirit/include/phoenix_core.hpp" #include "boost/spirit/include/phoenix_operator.hpp" #include "boost/spirit/include/qi_binary.hpp" // parsing binary data in various endianness template <typename P, typename T> void binary_parser( char const* input, P const& endian_word_type, T& voxel, bool full_match = true) { using boost::spirit::qi::parse; char const* f(input); char const* l(f + strlen(f)); bool result1 = parse(f,l,endian_word_type,voxel); bool result2 =((!full_match) || (f ==l)); if ( result1 && result2) { //doing nothing, parsing data is pass to voxel alreay } else { std::cerr << "Error: not match!!" << std::endl; exit(1); } } typedef boost::uint16_t bs_int16; typedef boost::uint32_t bs_int32; int main ( int argc, char *argv[] ) { namespace qi = boost::spirit::qi; namespace ascii = boost::spirit::ascii; using qi::big_word; using qi::big_dword; boost::uint32_t ui; float uf; binary_parser("\x01\x02\x03\x04",big_word,ui); assert(ui=0x01020304); binary_parser("\x01\x02\x03\x04",big_word,uf); assert(uf=0x01020304); return 0; } I almost copied the example, so why doesn't this binary parser work? I use Mac OS 10.5.8 and the gcc 4.01 compiler.
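
    A couple of things stand out when comparing against the Spirit docs (offered as observations, not a tested fix): big_word is the 16-bit big-endian parser and big_dword the 32-bit one, so big_word consumes only two of the four input bytes and the full-match check then fails; strlen also stops at the first zero byte, which real binary data frequently contains; and the asserts use = where == was probably intended. The intended byte layout is easy to sanity-check with Python's struct module:

      # Stand-in check: a big-endian 16-bit read ("word") uses only two of the four
      # bytes, while the 32-bit read ("dword") uses all four.
      import struct

      data = b"\x01\x02\x03\x04"
      print(hex(struct.unpack_from(">H", data, 0)[0]))   # 0x102       (16 bits)
      print(hex(struct.unpack_from(">I", data, 0)[0]))   # 0x1020304   (32 bits)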

    Read the article

  • Mac 10.6 Universal Binary scipy: cephes/specfun "_aswfa_" symbol not found

    - by Markus
    Hi folks, I can't get scipy to function in 32 bit mode when compiled as a i386/x86_64 universal binary, and executed on my 64 bit 10.6.2 MacPro1,1. My python setup With the help of this answer, I built a 32/64 bit intel universal binary of python 2.6.4 with the intention of using the arch command to select between the architectures. (I managed to make some universal binaries of a few libraries I wanted using lipo.) That all works. I then installed scipy according to the instructions on hyperjeff's article, only with more up-to-date numpy (1.4.0) and skipping the bit about moving numpy aside briefly during the installation of scipy. Now, everything except scipy seems to be working as far as I can tell, and I can indeed select between 32 and 64 bit mode using arch -i386 python and arch -x86_64 python. The error Scipy complains in 32 bit mode: $ arch -x86_64 python -c "import scipy.interpolate; print 'success'" success $ arch -i386 python -c "import scipy.interpolate; print 'success'" Traceback (most recent call last): File "<string>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/__init__.py", line 7, in <module> from interpolate import * File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/interpolate.py", line 13, in <module> import scipy.special as spec File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/__init__.py", line 8, in <module> from basic import * File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/basic.py", line 8, in <module> from _cephes import * ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so, 2): Symbol not found: _aswfa_ Referenced from: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so Expected in: flat namespace in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so Attempt at tracking down the problem It looks like scipy.interpolate imports something called _cephes, which looks for a symbol called _aswfa_ but can't find it in 32 bit mode. Browsing through scipy's source, I find an ASWFA subroutine in specfun.f. The only scipy product file with a similar name is specfun.so, but both that and _cephes.so appear to be universal binaries: $ cd /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/ $ file _cephes.so specfun.so _cephes.so: Mach-O universal binary with 2 architectures _cephes.so (for architecture i386): Mach-O bundle i386 _cephes.so (for architecture x86_64): Mach-O 64-bit bundle x86_64 specfun.so: Mach-O universal binary with 2 architectures specfun.so (for architecture i386): Mach-O bundle i386 specfun.so (for architecture x86_64): Mach-O 64-bit bundle x86_64 Ho hum. I'm stuck. Things I may try but haven't figured out how yet include compiling specfun.so myself manually, somehow. I would imagine that scipy isn't broken for all 32 bit machines, so I guess something is wrong with the way I've installed it, but I can't figure out what. I don't really expect a full answer given my fairly unique (?) setup, but if anyone has any clues that might point me in the right direction, they'd be greatly appreciated. (edit) More details to address questions: I'm using gfortran (GNU Fortran from GCC 4.2.1 Apple Inc. build 5646). 
Python 2.6.4 was installed more-or-less like so: cd /tmp curl -O http://www.python.org/ftp/python/2.6.4/Python-2.6.4.tar.bz2 tar xf Python-2.6.4.tar.bz2 cd Python-2.6.4 # Now replace buggy pythonw.c file with one that supports the "arch" command: curl http://bugs.python.org/file14949/pythonw.c | sed s/2.7/2.6/ > Mac/Tools/pythonw.c ./configure --enable-framework=/Library/Frameworks --enable-universalsdk=/ --with-universal-archs=intel make -j4 sudo make frameworkinstall Scipy 0.7.1 was installed pretty much as described as here, but it boils down to a simple sudo python setup.py install. It would indeed appear that the symbol is undefined in the i386 architecture if you look at the _cephes library with nm, as suggested by David Cournapeau: $ nm -arch x86_64 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so | grep _aswfa_ 00000000000d4950 T _aswfa_ 000000000011e4b0 d _oblate_aswfa_data 000000000011e510 d _oblate_aswfa_nocv_data (snip) $ nm -arch i386 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so | grep _aswfa_ U _aswfa_ 0002e96c d _oblate_aswfa_data 0002e99c d _oblate_aswfa_nocv_data (snip) however, I can't yet explain its absence.

    Read the article

  • Read half precision float (float16 IEEE 754r) binary data in matlab

    - by Michael
    You have been a great help last time; I hope you can give me some advice this time, too. I read a binary file into Matlab with bit16 (format = bitn) and I get a string of ones and zeros. bin = '1 00011 1111111111' (16 bits: 1. sign, 2-6. exponent, 7-16. mantissa) According to ftp://www.fox-toolkit.org/pub/fasthalffloatconversion.pdf it can be 'converted' like out = (-1)^bin(1) * 2^(bin(2:6)-15) * 1.bin(7:16) [are the exponent and mantissa still binary?] Can someone help me out and tell me how to deal with the 'eeeee' and '1.mmmmmmmmmm' as mentioned in the pdf, please? Thanks a lot! Michael
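
    To the bracketed question: the five exponent bits are read as an unsigned integer (then biased by 15), and the ten mantissa bits as a fraction m/1024 appended to an implicit leading 1 for normal numbers. A sketch of the decode, shown in Python instead of Matlab, using the bit pattern from the question:

      # Decode a 16-bit half-precision pattern by hand.
      def half_to_float(bits16):
          s = (bits16 >> 15) & 0x1           # sign bit
          e = (bits16 >> 10) & 0x1F          # exponent, biased by 15
          m = bits16 & 0x3FF                 # mantissa bits
          sign = -1.0 if s else 1.0
          if e == 0:                         # subnormal: no implicit leading 1
              return sign * (m / 1024.0) * 2.0 ** -14
          if e == 0x1F:                      # all-ones exponent: infinity or NaN
              return sign * float("inf") if m == 0 else float("nan")
          return sign * (1.0 + m / 1024.0) * 2.0 ** (e - 15)    # (-1)^s * 1.m * 2^(e-15)

      bits = int("1 00011 1111111111".replace(" ", ""), 2)
      print(half_to_float(bits))             # about -0.000488, i.e. -(1 + 1023/1024) * 2**-12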

    Read the article

  • Convert IP address string to binary in Python

    - by pizzim13
    As part of a larger application, I am trying to convert an IP address to binary. The purpose is to later calculate the broadcast address for Wake-on-LAN traffic. I am assuming that there is a much more efficient way to do this than the way I am thinking of, which is breaking up the IP address by octet, adding 0's to the beginning of each octet where necessary, converting each octet to binary, then combining the results. Should I be looking at netaddr, sockets, or something completely different? Example: from 192.168.1.1 to 11000000.10101000.00000001.00000001
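
    Two short ways to do this with the standard library are sketched below: format each octet as eight bits, or go through the packed 32-bit form from socket.inet_aton and struct (the /24 mask in the broadcast example is an assumption, not something given in the question):

      # Dotted-binary form and packed 32-bit form of an IPv4 address.
      import socket, struct

      ip = "192.168.1.1"

      dotted_bits = ".".join(f"{int(octet):08b}" for octet in ip.split("."))
      print(dotted_bits)                       # 11000000.10101000.00000001.00000001

      packed, = struct.unpack("!I", socket.inet_aton(ip))
      print(f"{packed:032b}")                  # same 32 bits without the dots

      # Broadcast address, assuming a /24 network (the usual Wake-on-LAN follow-up):
      broadcast = packed | 0x000000FF
      print(socket.inet_ntoa(struct.pack("!I", broadcast)))   # 192.168.1.255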

    Read the article
