Search Results

Search found 5662 results on 227 pages for 'processor socket'.

Page 213/227 | < Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >

  • PHP miniserver downloader bandwidth

    - by snikolov
        $httpsock = @socket_create_listen("9090");
        if (!$httpsock) {
            print "Socket creation failed!\n";
            exit;
        }
        while (1) {
            $client = socket_accept($httpsock);
            $input = trim(socket_read($client, 4096));
            $input = explode(" ", $input);
            $input = $input[1];
            $fileinfo = pathinfo($input);
            switch ($fileinfo['extension']) {
                default:
                    $mime = "text/html";
            }
            if ($input == "/") {
                $input = "index.html";
            }
            $input = ".$input";
            if (file_exists($input) && is_readable($input)) {
                echo "Serving $input\n";
                $contents = file_get_contents($input);
                $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents";
            } else {
                //$contents = "The file you requested doesn't exist. Sorry!";
                //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents";
                $filename = "dada";
                $file = fopen($filename, 'r');
                $filesize = filesize($filename);
                $buffer = fread($file, $filesize);
                $send = array("Output" => $buffer, "filesize" => $filesize, "filename" => $filename);
                $file = $send['filename'];
                $filesize = $send['filesize'];
                $output = 'HTTP/1.0 200 OK\r\n';
                $output .= "Content-type: application/octet-stream\r\n";
                $output .= 'Content-Disposition: attachment; filename="' . $file . '"\r\n';
                $output .= "Content-Length: $filesize\r\n";
                $output .= "Accept-Ranges: bytes\r\n";
                $output .= "Cache-Control: private\n\n";
                $output .= $send['Output'];
                $output .= "Content-Transfer-Encoding: binary";
                $output .= "Connection: Keep-Alive\r\n";
            }
            socket_write($client, $output);
            socket_close($client);
        }
        socket_close($httpsock);

    Hello, I have written the code above to work as a web-server downloader: the client can request files and download them, and I finally got it working. However, I would like to know whether the web server can show how much bandwidth each client request consumes via sockets. Perl has the same capability as PHP, but it is more hardcore than PHP and I don't understand much about Perl; I have even seen a mini web server that can show how much the client pulls from the server. Would it be possible for you to assist me with this code? I'd much appreciate it, thank you.
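    The bandwidth question comes down to counting the bytes you actually write to the socket. Below is a minimal, hedged sketch of that idea in Python rather than PHP, since the technique is language-agnostic; the port and the "dada" file name are taken from the post, everything else is illustrative. In the PHP above, the equivalent is accumulating the return value of socket_write() per client.

        # Minimal per-client bandwidth accounting for a toy HTTP server.
        import socket

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", 9090))
        srv.listen(5)

        while True:
            client, addr = srv.accept()
            client.recv(4096)                    # read (and ignore) the request line
            body = open("dada", "rb").read()     # file served in the post's else-branch
            header = (b"HTTP/1.0 200 OK\r\n"
                      b"Content-Type: application/octet-stream\r\n"
                      b"Content-Length: %d\r\n\r\n" % len(body))
            payload = header + body
            client.sendall(payload)              # sendall loops until every byte is queued
            print("served %d bytes to %s" % (len(payload), addr[0]))
            client.close()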

    Read the article

  • Transforming binary data using SSIS and SQL Server 2008

    - by Rick
    Hello All - I have a task to import, transform, and extract zipped binary files that contain both text data and embedded binary data. Within the files is data that is relational in nature and needs to be processed into a defined database structure. Currently I have a single-threaded C# app that essentially grabs all the files from the directory (currently there are 13K files of varying sizes) and extracts the data on a single thread, inserting into the database line by line. As you can imagine, this is a very slow process and unacceptable. Several different parsing routines are used depending on the header record in the file. There are potentially up to a million rows per file when all the data is extracted to the row level of detail. A follow-on task is to parse those rows into their appropriate tables based on their content, i.e. the textual content has to be parsed further into "buckets" of like data in the database. That about sums up the big picture. Now for the problem task list:

    1. How do I iterate through a packet of data using SSIS? In the app, the file is decompressed and then parsed using streams and byte arrays, and routed to the required parsing routine based on the header data of each packet. There is bit swapping involved as well. Should I wrap up the app code into script task(s) and let them do the custom processing?
    2. The data is separated by year, and the SQL Server tables are partitioned by year as well. I need to be able to "catch" bad file data as well and, most likely, process it by hand.
    3. Should I simply load the zipped file into SQL as a blob and parse the file with T-SQL? Would that be multithreaded if done that way? I am not sure how to do the parsing involved here in T-SQL. Which do you think would be faster?
    4. Potentially, the data that is currently processed via files could come to us via a socket. Can SSIS collect that data in real time? How would I go about setting that up?

    Processing these new files from the directories will become a daily task. I can manage the data once I get it to SQL Server; getting it there in a timely fashion seems to be the long pole in the tent for me. I would appreciate any comments or suggestions from the group. Rick
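    For the "iterate through a packet and dispatch on the header" part, here is a hedged sketch of the usual pattern, in Python for brevity; the 4-byte header layout (2-byte type, 2-byte length, big-endian) and the parser names are invented for illustration, not taken from the poster's actual file format.

        import struct

        PARSERS = {}

        def parser(packet_type):
            # decorator that registers a routine for one header type
            def register(fn):
                PARSERS[packet_type] = fn
                return fn
            return register

        @parser(1)
        def parse_text(payload):
            return payload.decode("ascii", errors="replace")

        @parser(2)
        def parse_binary(payload):
            # '>' forces big-endian; this is also where byte swapping would live
            return struct.unpack(">%dI" % (len(payload) // 4), payload)

        def iter_packets(blob):
            offset = 0
            while offset < len(blob):
                ptype, length = struct.unpack_from(">HH", blob, offset)
                yield PARSERS[ptype](blob[offset + 4 : offset + 4 + length])
                offset += 4 + length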

    Read the article

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem is with respect to the write speed of the computers (10 × 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write with them in order to speed up my data handling. As for the PostgreSQL table, the overall record count is 140 million and I have 5 primary/foreign-key referring tables. I am not using joins, as they are not scalable, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into the table and its corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1     (primary key & indexed)
        select q_t from x.qt where q_id=2       (non-primary key but indexed)

    (and similarly, five such queries). Each computer writes two HDF5 files, so the total count comes to around 20 files.

    Some calculations and statistics:

        Total number of records: 143,700,000
        Total number of records per file: 143,700,000 / 20 = 7,185,000
        Total number of entries in each file: 7,185,000 × 5 = 35,925,000

    Current PostgreSQL database config: my current machine has 8 GB of RAM with an i7 2nd-generation processor. I made the following changes to the PostgreSQL configuration file: shared_buffers = 2 GB, effective_cache_size = 4 GB.

    Note on current performance: I have run it for about ten hours and the performance is as follows: the total number of records written for each file is about 621,000 × 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and if it processes at this speed it will take about 11 days, which is too long for my experiments. Please suggest how I can improve this. Questions:

    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it enhance the process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks, Sree Aurovindh V
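    One hedged suggestion that follows from the numbers above: the dominant cost in this kind of workload is usually the millions of single-row round trips, not the write side. A server-side (named) cursor streams each table in bulk instead, as in this psycopg2 sketch; the table and column names are taken loosely from the question, and the batch size is arbitrary.

        import psycopg2

        conn = psycopg2.connect("dbname=x")
        cur = conn.cursor(name="train_stream")   # named cursor => server-side, streamed
        cur.itersize = 10000                     # rows fetched per network round trip
        cur.execute("SELECT * FROM x.train ORDER BY tr_id")

        batch = []
        for row in cur:
            batch.append(row)
            if len(batch) == 10000:
                # a PyTables table.append(batch) here writes the whole chunk at once
                batch.clear()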

    Read the article

  • PHP miniwebserver file download

    - by snikolov
        $httpsock = @socket_create_listen("9090");
        if (!$httpsock) {
            print "Socket creation failed!\n";
            exit;
        }
        while (1) {
            $client = socket_accept($httpsock);
            $input = trim(socket_read($client, 4096));
            $input = explode(" ", $input);
            $input = $input[1];
            $fileinfo = pathinfo($input);
            switch ($fileinfo['extension']) {
                default:
                    $mime = "text/html";
            }
            if ($input == "/") {
                $input = "index.html";
            }
            $input = ".$input";
            if (file_exists($input) && is_readable($input)) {
                echo "Serving $input\n";
                $contents = file_get_contents($input);
                $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents";
            } else {
                //$contents = "The file you requested doesn't exist. Sorry!";
                //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents";
                function openfile() {
                    $filename = "a.pl";
                    $file = fopen($filename, 'r');
                    $filesize = filesize($filename);
                    $buffer = fread($file, $filesize);
                    $array = array("Output" => $buffer, "filesize" => $filesize, "filename" => $filename);
                    return $array;
                }
                $send = openfile();
                $file = $send['filename'];
                $filesize = $send['filesize'];
                $output = 'HTTP/1.0 200 OK\r\n';
                $output .= "Content-type: application/octet-stream\r\n";
                $output .= 'Content-Disposition: attachment; filename="' . $file . '"\r\n';
                $output .= "Content-Length: $filesize\r\n";
                $output .= "Accept-Ranges: bytes\r\n";
                $output .= "Cache-Control: private\n\n";
                $output .= $send['Output'];
                $output .= "Content-Transfer-Encoding: binary";
                $output .= "Connection: Keep-Alive\r\n";
            }
            socket_write($client, $output);
            socket_close($client);
        }
        socket_close($httpsock);

    Hello, I am snikolov and I am creating a mini web server in PHP. I would like to know how I can send the client a file to download with his browser, such as Firefox or Internet Explorer. I am sending a file to the user to download via sockets, but the client is not getting the filename and the information to download; can you please help me here? If I declare the file function again I get this error from my server: Fatal error: Cannot redeclare openfile() (previously declared in C:\Users\fsfdsf\sfdsfsdf\httpd.php:31) in C:\Users\hfghfgh\hfghg\httpd.php on line 29. If possible, I would also like to know whether the web server can show how much bandwidth the user requests via sockets. Perl has the same option as PHP but is more hardcore, and I don't understand much about Perl; I even saw that a mini web server can show how much the client pulls from the server. Would it be possible for you to assist me with this code? I much appreciate it, thank you.
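    Two things stand out in the post, phrased here as hedged observations rather than a definitive fix: openfile() is declared inside the while loop, so the second request hits the "cannot redeclare" fatal (declaring it once, above the loop, avoids that); and '\r\n' inside single quotes in PHP is two literal characters rather than a line break, so the status line and the Content-Disposition header never terminate properly, which is consistent with the browser not seeing the filename. For reference, a well-formed attachment response looks like the following sketch (in Python, with the filename as a placeholder): all headers separated by CRLF, one blank line, then the raw bytes.

        def attachment_response(filename):
            body = open(filename, "rb").read()
            headers = (
                "HTTP/1.0 200 OK\r\n"
                "Content-Type: application/octet-stream\r\n"
                f'Content-Disposition: attachment; filename="{filename}"\r\n'
                f"Content-Length: {len(body)}\r\n"
                "Connection: close\r\n"
                "\r\n"
            )
            return headers.encode("latin-1") + body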

    Read the article

  • Anybody Know of any Tools to help Analysing .NET Trace Log Files?

    - by peter
    I am developing a C# .NET application. In the app.config file I add trace logging as shown:

        <?xml version="1.0" encoding="UTF-8" ?>
        <configuration>
          <system.diagnostics>
            <trace autoflush="true" />
            <sources>
              <source name="System.Net.Sockets" maxdatasize="1024">
                <listeners>
                  <add name="MyTraceFile"/>
                </listeners>
              </source>
            </sources>
            <sharedListeners>
              <add name="MyTraceFile" type="System.Diagnostics.TextWriterTraceListener" initializeData="System.Net.trace.log" />
            </sharedListeners>
            <switches>
              <add name="System.Net" value="Verbose" />
            </switches>
          </system.diagnostics>
        </configuration>

    Are there any good tools around to analyse the log file that is produced? The output looks like this:

        System.Net.Sockets Verbose: 0 : [5900] Data from Socket#8764489::Send
        DateTime=2010-04-07T03:22:40.1067012Z
        System.Net.Sockets Verbose: 0 : [5900] 00000000 : 4D 49 4D 45 2D 56 65 72-73 69 6F 6E 3A 20 31 2E : MIME-Version: 1.
        DateTime=2010-04-07T03:22:40.1067012Z
        System.Net.Sockets Verbose: 0 : [5900] 00000060 : 65 3A 20 37 20 41 70 72-20 32 30 31 30 20 31 35 : e: 7 Apr 2010 15
        DateTime=2010-04-07T03:22:40.1067012Z
        System.Net.Sockets Verbose: 0 : [5900] 00000070 : 3A 32 32 3A 34 30 20 2B-31 32 30 30 0D 0A 53 75 : :22:40 +1200..Su
        DateTime=2010-04-07T03:22:40.1067012Z
        System.Net.Sockets Verbose: 0 : [5900] 00000080 : 62 6A 65 63 74 3A 20 5B-45 72 72 6F 72 5D 20 45 : bject: [Error] E
        DateTime=2010-04-07T03:22:40.1067012Z
        System.Net.Sockets Verbose: 0 : [5900] 00000090 : 78 63 65 70 74 69 6F 6E-20 69 6E 20 53 79 6E 63 : xception in Sync
        DateTime=2010-04-07T03:22:40.1067012Z
        System.Net.Sockets Verbose: 0 : [5900] 000000A0 : 53 65 72 76 69 63 65 20-28 32 30 30 38 2E 30 2E : Service (2008.0.
        DateTime=2010-04-07T03:22:40.1067012Z
        System.Net.Sockets Verbose: 0 : [5900] 000000B0 : 33 30 34 2E 31 32 33 34-32 29 0D 0A 43 6F 6E 74 : 304.12342)..Cont
        DateTime=2010-04-07T03:22:40.1067012Z

    If there is anything that can take the output shown above (my output is a text file 100 MB in size), group packets together, and help with finding particular issues, I would like to hear about it. Thanks.
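    No dedicated viewer is assumed here, but the System.Net trace format shown above is regular enough to slice with a short script. This hedged Python sketch groups records by thread and Socket#id and re-joins the ASCII column of each hex-dump line, which is usually enough to reassemble and search whole packets.

        import re
        from collections import defaultdict

        start = re.compile(r"\[(\d+)\] Data from Socket#(\d+)::(\w+)")
        hexln = re.compile(r"\[(\d+)\] [0-9A-F]{8} : [-0-9A-F ]+ : (.*)$")

        streams = defaultdict(str)   # (thread, socket, direction) -> reassembled text
        current = {}                 # thread id -> (socket, direction)

        with open("System.Net.trace.log") as log:
            for line in log:
                m = start.search(line)
                if m:
                    current[m.group(1)] = (m.group(2), m.group(3))
                    continue
                m = hexln.search(line)
                if m and m.group(1) in current:
                    sock, direction = current[m.group(1)]
                    streams[(m.group(1), sock, direction)] += m.group(2)

        for key, text in streams.items():
            print(key, text[:120])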

    Read the article

  • Coolstack MySQL Crash Unable to Restart

    - by rayblasdel
    Environment: Solaris 10. This MySQL server had been up and running for 6 months. Today, all of a sudden, it crashed. Typing 'mysql' as a user now gives the error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock'

    The service tries to start mysqld, it stays up for 9-10 seconds, and then the process restarts. Below are the application logs (Application-database-mysql_mysql-csk.log):

        [ May 30 22:37:52 Enabled. ]
        [ May 30 22:37:58 Rereading configuration. ]
        [ May 30 22:37:59 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ]
        /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid
        [ May 30 22:37:59 Method "start" exited with status 0 ]
        [ May 30 22:38:13 Stopping because all processes in service exited. ]
        [ May 30 22:38:13 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ]
        [ May 30 22:38:13 Method "stop" exited with status 0 ]
        [ May 30 22:38:13 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ]
        /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid
        [ May 30 22:38:13 Method "start" exited with status 0 ]
        [ May 30 22:38:25 Stopping because all processes in service exited. ]
        [ May 30 22:38:25 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ]
        [ May 30 22:38:25 Method "stop" exited with status 0 ]

    I am hoping someone might have run into this before and might know how to fix it. The following is an excerpt from the MySQL error log:

        100530 22:44:03 mysqld_safe mysqld from pid file /dbpool1/data/database.soliaonline.com.pid ended
        100530 22:44:04 mysqld_safe Starting mysqld daemon with databases from /dbpool1/data
        InnoDB: Log scan progressed past the checkpoint lsn 32 727170612
        100530 22:44:13 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        InnoDB: Doing recovery: scanned up to log sequence number 32 727200901
        100530 22:44:14 InnoDB: Starting an apply batch of log records to the database...
        InnoDB: Progress in percents:
        100530 22:44:14 - mysqld got signal 11 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=209715200
        read_buffer_size=1048576
        max_used_connections=0
        max_threads=10000
        threads_connected=0
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 31024253 K
        bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.
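    Working the server's own equation backwards from the numbers in the crash report gives a useful hint: with the printed total of 31,024,253 KB, the implied per-thread sort buffer is about 2 MB, and max_threads = 10000 is what inflates the worst case. A quick arithmetic check (all inputs taken from the log above):

        key_buffer_size  = 209715200          # bytes, from the log
        read_buffer_size = 1048576            # from the log
        max_threads      = 10000              # from the log
        total_bytes      = 31024253 * 1024    # the "31024253 K bytes" line

        sort_buffer_size = (total_bytes - key_buffer_size) // max_threads - read_buffer_size
        print(sort_buffer_size)               # ~2107335 bytes, i.e. roughly 2 MB per thread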

    Read the article

  • Error using paho-mqtt in App Engine Python App

    - by calumb
    I am trying to write a Google Cloud Platform app in Python with Flask that makes an MQTT connection. I have included the paho python library by doing pip install paho-mqtt -t libs/. However, when I try to run the app, even if I don't try to connect to MQTT, I get a weird error about IP address checking:

        RuntimeError: error('illegal IP address string passed to inet_pton',)

    It seems something in the remote_socket lib is causing a problem. Is this a security issue? Is there some way to disable it? Relevant code:

        from flask import Flask
        import paho.mqtt.client as mqtt
        import logging as logger

        app = Flask(__name__)
        # Note: We don't need to call run() since our application is embedded within
        # the App Engine WSGI application server.

        # callback to print out connection status
        def on_connect(mosq, obj, rc):
            logger.info('on_connect')
            if rc == 0:
                logger.info("Connected")
                mqttc.subscribe('test', 0)
            else:
                logger.info(rc)

        def on_message(mqttc, obj, msg):
            logger.info(msg.topic + " " + str(msg.qos) + " " + str(msg.payload))

        mqttc = mqtt.Client("mqttpy")
        mqttc.on_message = on_message
        mqttc.on_connect = on_connect

    As well as the full stack trace:

        ERROR 2014-06-03 15:14:57,285 wsgi.py:262] Traceback (most recent call last):
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 239, in Handle
            handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 298, in _LoadHandler
            handler, path, err = LoadObject(self._handler)
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 84, in LoadObject
            obj = __import__(path[0])
          File "/Users/cbarnes/code/ignite/tank-demo/appengine-flask-demo/main.py", line 24, in <module>
            mqttc = mqtt.Client("mqtthtpp")
          File "/Users/cbarnes/code/ignite/tank-demo/appengine-flask-demo/lib/paho/mqtt/client.py", line 403, in __init__
            self._sockpairR, self._sockpairW = _socketpair_compat()
          File "/Users/cbarnes/code/ignite/tank-demo/appengine-flask-demo/lib/paho/mqtt/client.py", line 255, in _socketpair_compat
            listensock.bind(("localhost", 0))
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/dist27/socket.py", line 222, in meth
            return getattr(self._sock,name)(*args)
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/remote_socket/_remote_socket.py", line 668, in bind
            self._SetProtoFromAddr(request.mutable_proxy_external_ip(), address)
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/remote_socket/_remote_socket.py", line 632, in _SetProtoFromAddr
            proto.set_packed_address(self._GetPackedAddr(address))
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/remote_socket/_remote_socket.py", line 627, in _GetPackedAddr
            AI_NUMERICSERV|AI_PASSIVE):
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/remote_socket/_remote_socket.py", line 338, in getaddrinfo
            canonical=(flags & AI_CANONNAME))
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/remote_socket/_remote_socket.py", line 211, in _Resolve
            canon, aliases, addresses = _ResolveName(name, families)
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/remote_socket/_remote_socket.py", line 229, in _ResolveName
            apiproxy_stub_map.MakeSyncCall('remote_socket', 'Resolve', request, reply)
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall
            return stubmap.MakeSyncCall(service, call, request, response)
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 328, in MakeSyncCall
            rpc.CheckSuccess()
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/apiproxy_rpc.py", line 156, in _WaitImpl
            self.request, self.response)
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 200, in MakeSyncCall
            self._MakeRealSyncCall(service, call, request, response)
          File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 234, in _MakeRealSyncCall
            raise pickle.loads(response_pb.exception())
        RuntimeError: error('illegal IP address string passed to inet_pton',)
        INFO 2014-06-03 15:14:57,291 module.py:639] default: "GET / HTTP/1.1" 500 -

    Thanks!

    Read the article

  • XmlHttpRequest bug?

    - by valdo
    Hello all. I'm writing a program that, among other things, needs to download a file given its URL. I'm too lazy to implement the HTTP/HTTPS protocols manually, so I needed some library/object/function that'll do the job.

    Critical requirement: the download must be asynchronous. That is, the thread that issued the download must be able to do something else "while" downloading the file, and the download must be abortable at any time without any barbaric side effects (such as an internal call to TerminateThread).

    Nice-to-have requirements: it should be able to download the file "into memory", meaning it reads the contents of the file as they arrive, not necessarily saving them into some file-system file. It'd also be nice to have some convenient Win32 progress-notification mechanism (waitable event, semaphore, completion port, etc.), rather than just periodically polling the download status.

    I've chosen the XmlHttpRequest COM object to do the work. It seemed to work well enough, plus it supports asynchronous mode. However, I noticed that after some period it just stops working. That is, after several successful file downloads it stops downloading anything. I periodically poll it to get its status; it reports "in progress", but nothing actually happens and there's no network activity. Moreover, when the same process creates another instance of the XmlHttpRequest object to perform new downloads, the effect is the same: the object reports "in progress" whereas it doesn't even try to connect to the server (according to network sniffers and system TCP state). The only way to make this object work again is to restart the process. This makes me suspect that there's a sort of bug (sorry, I meant undocumented feature) in the object. It's also not a bug at the level of an individual object, since the problem persists when the object is destroyed and another one is created; it's probably some global state of the DLL that implements this object.

    Does anyone know something about this? Is this a known bug? I'm pretty sure there's no chance that I have another bug in my code that only makes it seem as though the bug is in XmlHttpRequest; I've done enough tests and spent enough time with the debugger to conclude beyond reasonable doubt that it's the object that stops working. BTW, while the object should work, I do all the waiting via MsgWaitXXXX API calls, so if this object needs the message loop to work properly (for instance, it may create a hidden notification window and bind it to a socket via WSAAsyncSelect), I give it the opportunity.

    Read the article

  • How do I do high quality scaling of an image?

    - by pbhogan
    I'm writing some code to scale a 32-bit RGBA image in C/C++. I have written a few attempts that have been somewhat successful, but they're slow and, most importantly, the quality of the scaled image is not acceptable. I compared the same image scaled by OpenGL (i.e. my video card) and by my routine, and it's miles apart in quality. I've searched Google Code Search and scoured the source trees of anything I thought would shed some light (SDL, Allegro, wxWidgets, CxImage, GD, ImageMagick, etc.), but usually their code is either convoluted and scattered all over the place or riddled with assembler and little or no comments. I've also read multiple articles on Wikipedia and elsewhere, and I'm just not finding a clear explanation of what I need. I understand the basic concepts of interpolation and sampling, but I'm struggling to get the algorithm right. I do NOT want to rely on an external library for one routine and have to convert to their image format and back. Besides, I'd like to know how to do it myself anyway. :) I have seen a similar question asked on Stack Overflow before, but it wasn't really answered in this way; I'm hoping there's someone out there who can help nudge me in the right direction. Maybe point me to some articles or pseudo-code... anything to help me learn and do it.

    Here's what I'm looking for:

    1. No assembler (I'm writing very portable code for multiple processor types).
    2. No dependencies on external libraries.
    3. I am primarily concerned with scaling DOWN, but will also need to write a scale-up routine later.
    4. Quality of the result and clarity of the algorithm are most important (I can optimize later).

    My routine essentially takes the following form:

        DrawScaled(uint32 *src, uint32 *dst,
                   src_x, src_y, src_w, src_h,
                   dst_x, dst_y, dst_w, dst_h);

    Thanks!

    UPDATE: To clarify, I need something more advanced than a box resample for downscaling, which blurs the image too much. I suspect what I want is some kind of bicubic (or other) filter that is somewhat the reverse of a bicubic upscaling algorithm (i.e. each destination pixel is computed from all contributing source pixels, combined with a weighting algorithm that keeps things sharp).

    EXAMPLE: The original post showed an example of the wxWidgets BoxResample output vs. the desired output for a 256x256 bitmap scaled to 55x55, along with the original 256x256 image (the images did not survive here).
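    For what it's worth, here is a hedged sketch of the standard approach in Python (grayscale, plain lists, clarity over speed): resample each axis separately with a filter whose support is widened by the scale factor, so every covered source pixel contributes with a weight. A triangle (tent) kernel is shown; swapping in a Catmull-Rom or Lanczos kernel is the sharper variant the question is after.

        def resample_axis(src, src_len, dst_len):
            scale = src_len / dst_len
            support = max(1.0, scale)            # widen the kernel when downscaling
            out = []
            for d in range(dst_len):
                center = (d + 0.5) * scale
                lo = max(0, int(center - support))
                hi = min(src_len, int(center + support) + 1)
                acc = wsum = 0.0
                for s in range(lo, hi):
                    w = 1.0 - abs((s + 0.5) - center) / support   # tent weight
                    if w > 0.0:
                        acc += w * src[s]
                        wsum += w
                out.append(acc / wsum)
            return out

        def scale_gray(img, w, h, new_w, new_h):
            # img is a flat row-major list of w*h samples
            rows = [resample_axis(img[y * w:(y + 1) * w], w, new_w) for y in range(h)]
            cols = [resample_axis([rows[y][x] for y in range(h)], h, new_h)
                    for x in range(new_w)]
            return [cols[x][y] for y in range(new_h) for x in range(new_w)]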

    Read the article

  • Bad_alloc exception when using new for a struct c++

    - by bsg
    Hi, I am writing a query processor which allocates large amounts of memory and tries to find matching documents. Whenever I find a match, I create a structure to hold two variables describing the document and add it to a priority queue. Since there is no way of knowing how many times I will do this, I tried creating my structs dynamically using new. When I pop a struct off the priority queue, the queue (the STL priority_queue implementation) is supposed to call the object's destructor. My struct has no destructor, so I assume a default destructor is called in that case. However, the very first time I try to create a DOC struct, I get the following error:

        Unhandled exception at 0x7c812afb in QueryProcessor.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0012f5dc..

    I don't understand what's happening: have I used up so much memory that the heap is full? It doesn't seem likely, and it's not as if I've even used that pointer before. So: first of all, what am I doing that's causing the error, and secondly, will the following code work more than once? Do I need to have a separate pointer for each struct created, or can I re-use the same temporary pointer and assume that the queue will keep a pointer to each struct? Here is my code:

        struct DOC {
            int docid;
            double rank;

        public:
            DOC() {
                docid = 0;
                rank = 0.0;
            }

            DOC(int num, double ranking) {
                docid = num;
                rank = ranking;
            }

            bool operator>(const DOC &d) const { return rank > d.rank; }
            bool operator<(const DOC &d) const { return rank < d.rank; }
        };

        // a lot of processing goes on here; when a matching document is found, I do this:
        rank = calculateRanking(table, num);

        // if the heap is not full, create a DOC struct with the docid and rank and add it to the heap
        if (q.size() < 20) {
            doc = new DOC(num, rank);
            q.push(*doc);
            doc = NULL;
        }
        // if the heap is full, but the new rank is greater than the
        // smallest element in the min heap, remove the current smallest element
        // and add the new one to the heap
        else if (rank > q.top().rank) {
            q.pop();
            cout << "pushing doc on to queue" << endl;
            doc = new DOC(num, rank);
            q.push(*doc);
        }

    Thank you very much, bsg.

    Read the article

  • XML pass values to timer, AS3

    - by VideoDnd
    My timer has three variables that I can trace to the output window, but I don't know how to pass them to the timer. How do I pass the XML values to my timer? Purpose: I want to test with an XML document before I try connecting it to an XMLSocket.

    myXML (time.xml):

        <?xml version="1.0" encoding="utf-8"?>
        <SESSION>
          <TIMER TITLE="speed">100</TIMER>
          <COUNT TITLE="starting position">-77777</COUNT>
          <FCOUNT TITLE="ramp">1000</FCOUNT>
        </SESSION>

    myFlash:

        //myTimer 'instance of mytext on stage'
        /* fields I want to change with XML */
        //CHANGE TO 100
        var timer:Timer = new Timer(10);
        //CHANGE TO -77777
        var count:int = 0;
        //CHANGE TO 1000
        var fcount:int = 0;

        timer.addEventListener(TimerEvent.TIMER, incrementCounter);
        timer.start();

        function incrementCounter(event:TimerEvent) {
            count++;
            fcount = int(count * count / 1000); // starts out slow... then speeds up
            mytext.text = formatCount(fcount);
        }

        function formatCount(i:int):String {
            var fraction:int = i % 100;
            var whole:int = i / 100;
            return ("0000000" + whole).substr(-7, 7) + "." + (fraction < 10 ? "0" + fraction : fraction);
        }

        //LOAD XML
        var myXML:XML;
        var myLoader:URLLoader = new URLLoader();
        myLoader.load(new URLRequest("time.xml"));
        myLoader.addEventListener(Event.COMPLETE, processXML);

        //PARSE XML
        function processXML(e:Event):void {
            myXML = new XML(e.target.data);
            trace(myXML.ROGUE.*);
            trace(myXML);

            //TEXT
            var text:TextField = new TextField();
            text.text = myXML.TIMER.*;
            text.textColor = 0xFF0000;
            addChild(text);
        }

    RESOURCES: O'Reilly's ActionScript 3.0 Cookbook, Chapter 12 (Strings), Chapter 20 (XML)
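    The underlying pattern, independent of AS3: parse the document first, then construct the timer from the parsed values. In the code above that would mean creating the Timer and initializing count/fcount inside processXML, once Event.COMPLETE has fired. A compact, hedged illustration with Python's ElementTree, using the XML from the post:

        import xml.etree.ElementTree as ET

        doc = ET.fromstring("""<SESSION>
          <TIMER TITLE="speed">100</TIMER>
          <COUNT TITLE="starting position">-77777</COUNT>
          <FCOUNT TITLE="ramp">1000</FCOUNT>
        </SESSION>""")

        delay  = int(doc.findtext("TIMER"))    # -> new Timer(100) in AS3
        count  = int(doc.findtext("COUNT"))    # -> var count:int = -77777
        fcount = int(doc.findtext("FCOUNT"))   # -> var fcount:int = 1000
        print(delay, count, fcount)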

    Read the article

  • Sanitizing DB inputs with XSLT

    - by azathoth
    Hello. I've been looking for a method to strip my XML content of apostrophes (') like:

        <name>Jim O'Connor</name>

    since my DBMS complains about receiving those. Looking at the example described here, which is supposed to replace ' with '', I constructed the following script:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output omit-xml-declaration="yes" indent="yes" />

          <xsl:template match="node()|@*">
            <xsl:copy>
              <xsl:apply-templates select="node()|@*" />
            </xsl:copy>
          </xsl:template>

          <xsl:template name="sqlApostrophe">
            <xsl:param name="string" />
            <xsl:variable name="apostrophe">'</xsl:variable>
            <xsl:choose>
              <xsl:when test="contains($string,$apostrophe)">
                <xsl:value-of select="concat(substring-before($string,$apostrophe), $apostrophe,$apostrophe)"
                              disable-output-escaping="yes" />
                <xsl:call-template name="sqlApostrophe">
                  <xsl:with-param name="string" select="substring-after($string,$apostrophe)" />
                </xsl:call-template>
              </xsl:when>
              <xsl:otherwise>
                <xsl:value-of select="$string" disable-output-escaping="yes" />
              </xsl:otherwise>
            </xsl:choose>
          </xsl:template>

          <xsl:template match="node()|@*">
            <xsl:apply-templates name="sqlApostrophe"/>
          </xsl:template>
        </xsl:stylesheet>

    However, the processor isn't accepting it. What am I missing here? Is there a better way to get rid of the apostrophes? Perhaps another approach to sanitizing DB inputs using XSLT? Thanks for your help.

    Read the article

  • mysql: can't set max_allowed_packet to anything greater than 16MB

    - by sas
    I'm not sure if this is the right place to post this kind of question; if it's not, please (politely) let me know... :-)

    I need to save files greater than 16 MB in a MySQL database from a PHP site. I've already changed c:\xampp\mysql\bin\my.cnf and set max_allowed_packet to 16M, and everything worked fine. Then I set it to 32M, but there's no way I can handle a file bigger than 16 MB: I get the error 'MySQL server has gone away' (the same error I had when max_allowed_packet was set to 1M). There must be some other setting that doesn't allow me to handle files bigger than 16 MB; maybe the PHP client, I guess, but I don't know where to edit that.

    This is the code I'm running. When file.txt is smaller than 16,776,192 bytes it works fine, but if file.txt is 16,777,216 bytes I get the aforementioned error. (And yes, the download.content field is a LONGBLOB.)

        $file = 'file.txt';
        $file_handle = fopen( $file, 'r' );
        $content = fread( $file_handle, filesize( $file ) );
        fclose( $file_handle );

        db_execute( 'truncate table download', true );

        $sql = "insert into download( code, title, name, description, original_name,
                    mime_type, size, content, user_insert_id, date_insert,
                    user_update_id, date_update )
                values ( 'new file', 'new file', 'sas.jpg', 'new file', '$file', 'mime',
                    " . filesize( $file ) . ", '" . addslashes( $content ) . "',
                    0, " . db_char_to_sql( now_char(), 'datetime' ) . ",
                    0, " . db_char_to_sql( now_char(), 'datetime' ) . " )";
        db_execute( $sql, true );

    (The db_execute function just opens the connection and executes the SQL.) Running on Windows XP SP2; server version: 5.0.67-community; PHP version 4.4.9; MySQL client API version: 3.23.49; using the ApacheFriends XAMPP base package version 1.6.8, which comes with Apache 2.2.9, MySQL 5.0.67 (Community Server), PHP 5.2.6 + PHP 4.4.9 + PEAR, and phpMyAdmin 2.11.9.2.

    This is part of the content of c:\xampp\mysql\bin\my.cnf:

        # The MySQL server
        [mysqld]
        port = 3306
        socket = "C:/xampp/mysql/mysql.sock"
        basedir = "C:/xampp/mysql"
        tmpdir = "C:/xampp/tmp"
        datadir = "C:/xampp/mysql/data"
        skip-locking
        key_buffer = 16M
        # max_allowed_packet = 1M
        max_allowed_packet = 32M
        table_cache = 128
        sort_buffer_size = 512K
        net_buffer_length = 8K
        read_buffer_size = 256K
        read_rnd_buffer_size = 512K
        myisam_sort_buffer_size = 8M

    Read the article

  • check if directory exists c#

    - by Ant
    I am trying to see if a directory exists based on an input field from the user. When the user types in the path, I want to check whether the path actually exists. I have some C# code already; it returns 1 for any local path, but always returns 0 when I am checking a network path.

        static string checkValidPath(string path)
        {
            // Insert your code that runs under the security context of the authenticating user here.
            using (ImpersonateUser user = new ImpersonateUser(user, "", password))
            {
                //DirectoryInfo d = new DirectoryInfo(quotelessPath);
                bool doesExist = Directory.Exists(path);
                //if (d.Exists)
                if (doesExist)
                {
                    user.Dispose();
                    return "1";
                }
                else
                {
                    user.Dispose();
                    return "0";
                }
            }
        }

        public class ImpersonateUser : IDisposable
        {
            [DllImport("advapi32.dll", SetLastError = true)]
            private static extern bool LogonUser(string lpszUsername, string lpszDomain, string lpszPassword,
                int dwLogonType, int dwLogonProvider, out IntPtr phToken);

            [DllImport("kernel32", SetLastError = true)]
            private static extern bool CloseHandle(IntPtr hObject);

            private IntPtr userHandle = IntPtr.Zero;
            private WindowsImpersonationContext impersonationContext;

            public ImpersonateUser(string user, string domain, string password)
            {
                if (!string.IsNullOrEmpty(user))
                {
                    // Call LogonUser to get a token for the user
                    bool loggedOn = LogonUser(user, domain, password,
                        9 /*(int)LogonType.LOGON32_LOGON_NEW_CREDENTIALS*/,
                        3 /*(int)LogonProvider.LOGON32_PROVIDER_WINNT50*/,
                        out userHandle);
                    if (!loggedOn)
                        throw new Win32Exception(Marshal.GetLastWin32Error());

                    // Begin impersonating the user
                    impersonationContext = WindowsIdentity.Impersonate(userHandle);
                }
            }

            public void Dispose()
            {
                if (userHandle != IntPtr.Zero)
                    CloseHandle(userHandle);
                if (impersonationContext != null)
                    impersonationContext.Undo();
            }
        }

    Any help is appreciated. Thanks!

    EDIT 3: Updated the code to use BrokenGlass's impersonation functions. However, I need to initialize "password" to something...

    EDIT 2: I updated the code to try to use impersonation as suggested below. It still fails every time. I assume I am using impersonation improperly...

    EDIT: As requested by ChrisF, here is the code that calls the checkValidPath function. Front-end aspx file:

        $.get('processor.ashx', { a: '7', path: x }, function(o) {
            alert(o);
            if (o == "0") {
                $("#outputPathDivValid").dialog({
                    title: 'Output Path is not valid! Please enter a path that exists!',
                    width: 500,
                    modal: true,
                    resizable: false,
                    buttons: {
                        'Close': function() {
                            $(this).dialog('close');
                        }
                    }
                });
            }
        });

    Back-end ashx file:

        public void ProcessRequest(HttpContext context)
        {
            context.Response.Cache.SetExpires(DateTime.Now);
            string sSid = context.Request["sid"];
            switch (context.Request["a"])
            {
                // a bunch of case statements here...
                case "7":
                    context.Response.Write(checkValidPath(context.Request["path"].ToString()));
                    break;

    Read the article

  • Can an asynchronously fired event run synchronously on a form?

    - by cyclotis04
    [VS 2010 Beta with .NET Framework 3.5] I've written a C# component to asynchronously monitor a socket and raise events when data is received. I set the VB form to show message boxes when the event is raised. What I've noticed is that when the component raises the event synchronously, the message box blocks the component code and locks the form until the user closes the message. When it's raised asynchronously, it neither blocks the code nor locks the form. What I want is a way to raise an event such that it does not block the code, but is called on the same thread as the form (so that it locks the form until the user selects an option). Can you help me out? Thanks.

    [Component]

        using System;
        using System.Threading;
        using System.ComponentModel;

        namespace mySpace
        {
            public delegate void SyncEventHandler(object sender, SyncEventArgs e);
            public delegate void AsyncEventHandler(object sender, AsyncEventArgs e);

            public class myClass
            {
                readonly object syncEventLock = new object();
                readonly object asyncEventLock = new object();
                SyncEventHandler syncEvent;
                AsyncEventHandler asyncEvent;

                private delegate void WorkerDelegate(string strParam, int intParam);

                public void DoWork(string strParam, int intParam)
                {
                    OnSyncEvent(new SyncEventArgs());
                    AsyncOperation asyncOp = AsyncOperationManager.CreateOperation(null);
                    WorkerDelegate delWorker = new WorkerDelegate(ClientWorker);
                    IAsyncResult result = delWorker.BeginInvoke(strParam, intParam, null, null);
                }

                private void ClientWorker(string strParam, int intParam)
                {
                    Thread.Sleep(2000);
                    OnAsyncEvent(new AsyncEventArgs());
                    OnAsyncEvent(new AsyncEventArgs());
                }

                public event SyncEventHandler SyncEvent
                {
                    add { lock (syncEventLock) syncEvent += value; }
                    remove { lock (syncEventLock) syncEvent -= value; }
                }

                public event AsyncEventHandler AsyncEvent
                {
                    add { lock (asyncEventLock) asyncEvent += value; }
                    remove { lock (asyncEventLock) asyncEvent -= value; }
                }

                protected void OnSyncEvent(SyncEventArgs e)
                {
                    SyncEventHandler handler;
                    lock (syncEventLock) handler = syncEvent;
                    if (handler != null) handler(this, e);                          // Blocks and locks
                    //if (handler != null) handler.BeginInvoke(this, e, null, null); // Neither blocks nor locks
                }

                protected void OnAsyncEvent(AsyncEventArgs e)
                {
                    AsyncEventHandler handler;
                    lock (asyncEventLock) handler = asyncEvent;
                    //if (handler != null) handler(this, e);                        // Blocks and locks
                    if (handler != null) handler.BeginInvoke(this, e, null, null);  // Neither blocks nor locks
                }
            }
        }

    [Form]

        Imports mySpace

        Public Class Form1
            Public WithEvents component As New mySpace.myClass()

            Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
                component.DoWork("String", 1)
            End Sub

            Private Sub component_SyncEvent(ByVal sender As Object, ByVal e As pbxapi.SyncEventArgs) Handles component.SyncEvent
                MessageBox.Show("Synchronous event", "Raised:", MessageBoxButtons.OK)
            End Sub

            Private Sub component_AsyncEvent(ByVal sender As Object, ByVal e As pbxapi.AsyncEventArgs) Handles component.AsyncEvent
                MessageBox.Show("Asynchronous event", "Raised:", MessageBoxButtons.OK)
            End Sub
        End Class

    Read the article

  • PHP running as a FastCGI application (php-cgi) - how to issue concurrent requests?

    - by Sbm007
    Some background information: I'm writing my own web server in Java, and a couple of days ago I asked on SO how exactly Apache interfaces with PHP, so I could implement PHP support. I learnt that FastCGI is the best approach (since mod_php is not an option). So I looked at the FastCGI protocol specification and managed to write a working FastCGI wrapper for my server. I have tested phpinfo() and it works; in fact, all PHP functions seem to work just fine (posting data, sessions, date/time, etc.).

    My web server is able to serve requests concurrently (i.e. user1 can retrieve file1.html at the same time as user2 requests some_large_binary_file.zip). It does this by spawning a new Java thread for each user request (terminating when completed or when the connection with the client is cancelled). However, it cannot deal with 2 (or more) FastCGI requests at the same time. What it does is queue them up, so when request 1 is completed, it immediately starts processing request 2. I tested this with 2 PHP pages, one containing sleep(10) and the other phpinfo(). How would I go about dealing with multiple requests? I know it can be done (PHP under IIS runs as FastCGI and can deal with multiple requests just fine).

    Some more info: I am coding under Windows, and my batch file used to execute php-cgi.exe contains:

        set PHP_FCGI_CHILDREN=8
        set PHP_FCGI_MAX_REQUESTS=500
        php-cgi.exe -b 9000

    But it does not spawn 8 children; the service simply terminates after 500 requests. I have done research, and from Wikipedia: "Processing of multiple requests simultaneously is achieved either by using a single connection with internal multiplexing (i.e. multiple requests over a single connection) and/or by using multiple connections."

    Now clearly the multiple-connections approach isn't working for me: every time a client requests something that involves FastCGI, my server creates a new socket to the FastCGI application, but it does not work concurrently (it queues them up instead). I know that internal multiplexing of FastCGI requests under the same connection is accomplished by issuing each unique FastCGI request a different request ID (also see the last 3 paragraphs under 'The Communication Protocol' heading in this article). I have not tested this, but how would I go about implementing it? I take it I need some kind of FastCGI Java thread which contains a Map of some sort and a static function which I can use to add requests. Then in the thread's run() function it would have a while loop, and in every cycle it would check whether the Map contains new requests; if so, it would assign them request IDs and write them to the FastCGI stream, then wait for input, etc. As you can see, this becomes complicated quickly. Does anyone know the correct way of doing this? Or any thoughts at all? Thanks very much. Note: if required, I can supply the code for my FastCGI wrapper.
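    On the wire-level part of the multiplexing idea: every FastCGI record carries a 16-bit request ID in its 8-byte header, so sharing one connection is a matter of framing each outgoing record with the ID of the request it belongs to and routing incoming records back by the same ID. Below is a hedged sketch of just that framing, in Python for compactness (record layout per the FastCGI 1.0 spec; the surrounding protocol handling is omitted).

        import struct

        FCGI_VERSION = 1
        FCGI_STDIN, FCGI_STDOUT = 5, 6          # record type codes from the spec

        def fcgi_record(rec_type, request_id, content=b""):
            pad = (8 - len(content) % 8) % 8    # records are padded to 8-byte multiples
            header = struct.pack(">BBHHBx", FCGI_VERSION, rec_type,
                                 request_id, len(content), pad)
            return header + content + b"\0" * pad

        def parse_header(data):
            _, rec_type, request_id, length, pad = struct.unpack(">BBHHBx", data[:8])
            return rec_type, request_id, length, pad

        # two requests in flight on one connection, distinguished only by their IDs:
        out = fcgi_record(FCGI_STDIN, 1) + fcgi_record(FCGI_STDIN, 2)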

    Read the article

  • Looping through array in PHP to post several multipart form-data

    - by Léon Pelletier
    I'm trying, in an ASP web application, to code a function that would loop through a list of files in a multiple-upload form and send them one by one. Is this something that can be done in ASP? I've read some posts about how to attach several files together, but saw nothing about looping through the files. I can easily imagine it in C# via HttpWebRequest or with a socket, but in PHP, I guess there are already functions designed to handle it?

        // This is false/pseudo-code :)
        for (int index = 0; index < number_of_files; index++)
        {
            postfile(file[index]);
        }

    And in each iteration, it should send one multipart form-data POST. postfile(TheFileInfos) should make a POST like this:

        POST /afs.aspx?fn=upload HTTP/1.1
        [Header stuff]
        Content-Type: multipart/form-data; boundary=----------Ef1Ef1cH2Ij5GI3ae0gL6KM7GI3GI3
        [Header stuff]

        ------------Ef1Ef1cH2Ij5GI3ae0gL6KM7GI3GI3
        Content-Disposition: form-data; name="Filename"

        myimage1.png
        ------------Ef1Ef1cH2Ij5GI3ae0gL6KM7GI3GI3
        Content-Disposition: form-data; name="fileid"

        58e21ede4ead43a5201206101806420000007667212251
        ------------Ef1Ef1cH2Ij5GI3ae0gL6KM7GI3GI3
        Content-Disposition: form-data; name="Filedata"; filename="myimage1.png"
        Content-Type: application/octet-stream

        [Octet Stream]

    [Edit] I'll try it:

        <html>
        <head>
        <title>Untitled Document</title>
        <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
        </head>
        <body>
        <form name="form1" enctype="multipart/form-data" method="post" action="processFiles.php">
          <p>
          <?
          // start of dynamic form
          $uploadNeed = $_POST['uploadNeed'];
          for ($x = 0; $x < $uploadNeed; $x++) {
          ?>
            <input name="uploadFile<? echo $x; ?>" type="file" id="uploadFile<? echo $x; ?>">
          </p>
          <?
          // end of for loop
          }
          ?>
          <p><input name="uploadNeed" type="hidden" value="<? echo $uploadNeed; ?>">
             <input type="submit" name="Submit" value="Submit">
          </p>
        </form>
        </body>
        </html>
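    The loop itself, sketched with Python's requests library as a stand-in (the URL and field names are placeholders matching the form above): one multipart POST per file, which is exactly what the pseudo-code at the top of the question describes.

        import requests

        files_to_send = ["myimage1.png", "myimage2.png"]

        for path in files_to_send:
            with open(path, "rb") as fh:
                resp = requests.post(
                    "http://example.com/processFiles.php",
                    data={"Filename": path},
                    files={"Filedata": (path, fh, "application/octet-stream")},
                )
            print(path, resp.status_code)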

    Read the article

  • XmlSerializer.Deserialize blocks over NetworkStream

    - by Luca
    I'm trying to send XML-serializable objects over a network stream. I've already used this on a UDP broadcast server, where it receives UDP messages from the local network. Here is a snippet of the server side:

        while (mServiceStopFlag == false)
        {
            if (mSocket.Available > 0)
            {
                IPEndPoint ipEndPoint = new IPEndPoint(IPAddress.Any, DiscoveryPort);
                byte[] bData;

                // Receive discovery message
                bData = mSocket.Receive(ref ipEndPoint);

                // Handle discovery message
                HandleDiscoveryMessage(ipEndPoint.Address, bData);
                ...

    And this is the client side:

        IPEndPoint ipEndPoint = new IPEndPoint(IPAddress.Broadcast, DiscoveryPort);
        MemoryStream mStream = new MemoryStream();
        byte[] bData;

        // Create broadcast UDP server
        mSocket = new UdpClient();
        mSocket.EnableBroadcast = true;

        // Create datagram data
        foreach (NetService s in ctx.Services)
            XmlHelper.SerializeClass<NetService>(mStream, s);
        bData = mStream.GetBuffer();

        // Notify the services
        while (mServiceStopFlag == false)
        {
            mSocket.Send(bData, (int)mStream.Length, ipEndPoint);
            Thread.Sleep(DefaultServiceLatency);
        }

    It works very well. But now I'm trying to get the same result over a TcpClient socket, using an XmlSerializer instance directly. On the server side:

        TcpClient sSocket = k.Key;
        ServiceContext sContext = k.Value;
        Message msg = new Message();

        while (sSocket.Connected == true)
        {
            if (sSocket.Available > 0)
            {
                StreamReader tr = new StreamReader(sSocket.GetStream());
                msg = (Message)mXmlSerialize.Deserialize(tr);

                // Handle message
                msg = sContext.Handler(msg);

                // Reply with another message
                if (msg != null)
                    mXmlSerialize.Serialize(sSocket.GetStream(), msg);
            }
            else
                Thread.Sleep(40);
        }

    And on the client side:

        NetworkStream mSocketStream;
        Message rMessage;

        // Network stream
        mSocketStream = mSocket.GetStream();

        // Send the message
        mXmlSerialize.Serialize(mSocketStream, msg);

        // Receive the answer
        rMessage = (Message)mXmlSerialize.Deserialize(mSocketStream);

        return (rMessage);

    The data is sent (the Available property is greater than 0), but the call to XmlSerialize.Deserialize (which should deserialize the Message class) blocks. What am I missing?

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format: each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system:

        |----------------+-------------------------------|
        | OS             | Windows 2008 64-bit           |
        | MySQL version  | 5.5.24 (x86_64)               |
        | CPU            | 2x Xeon E5420 (8 cores total) |
        | RAM            | 8GB                           |
        | SSD filesystem | 500 GiB                       |
        | HDD RAID       | 12 TiB                        |
        |----------------+-------------------------------|

    There are some other services running on the server using negligible processor time.

    File statistics:

        |------------------+--------------|
        | number of files  | ~16,000      |
        | total size       | 1.3 TiB      |
        | min size         | 0 bytes      |
        | max size         | 12 GiB       |
        | mean             | 800 MiB      |
        | median           | 500 MiB      |
        | total datapoints | ~200 billion |
        |------------------+--------------|

    The total number of datapoints is a very rough estimate.

    Proposed schema: I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 billion datapoint question: I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info. The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements, where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns.

    My naïve plan for a database schema is:

        runs table
        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | start_time  | TIMESTAMP   |
        | name        | VARCHAR     |

        spectra table
        | column name    | type        |
        |----------------+-------------|
        | id             | PRIMARY KEY |
        | name           | VARCHAR     |
        | index          | INT         |
        | spectrum_type  | INT         |
        | representation | INT         |
        | run_id         | FOREIGN KEY |

        datapoints table
        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | spectrum_id | FOREIGN KEY |
        | mz          | DOUBLE      |
        | num_counts  | DOUBLE      |
        | index       | INT         |

    Is this reasonable?

    Read the article

  • BlackBerry OS 7.1 secured TLS connection is closed after very short time

    - by MrVincenzo
    To make a long story short: same client-server configuration, same network topology, same device (Bold 9900). It works perfectly well on OS 7.0 but doesn't work as expected on OS 7.1, where the secured TLS connection is closed by the server after a very short time.

    My application opens a secured TLS connection to a server. The connection is kept alive by an application-layer keep-alive mechanism and remains open until the client closes it. Below is a simplified version of the actual code that opens the connection and reads from the socket. The code works perfectly on OS 5.0-7.0 but doesn't work as expected on OS 7.1: when running on OS 7.1, the blocking read() returns -1 (end of stream reached) after a very short time (10-45 seconds). On OS 5.0-7.0 the call to read() remains blocked until the next data arrives, and the connection is never closed by the server.

        Connection connection = Connector.open(connectionString);
        connInputStream = connection.openInputStream();

        while (true) {
            try {
                retVal = connInputStream.read();
                if (-1 == retVal) {
                    break; // end of stream has been reached
                }
            } catch (Exception e) {
                // do error handling
            }
            // data read from stream is handled here
        }

    UPDATE 1: Apparently, the problem appears only when I use a secured TLS connection (either over the mobile network or WiFi) on OS 7.1. Everything works as expected when opening a non-secured connection on OS 7.1. For TLS on mobile networks I use the following connection string:

        connectionString = "tls://someipaddress:443;deviceside=false;ConnectionType=mds-public;EndToEndDesired";

    For TLS on WiFi I use the following connection string:

        connectionString = "tls://someipaddress:443;deviceside=true;interface=wifi;EndToEndRequired";

    UPDATE 2: The connection is never idle; I am constantly receiving and sending data on it. The issue appears both when using a mobile connection and WiFi, and both on real OS 7.1 devices and simulators. I am starting to suspect that it is somehow related either to the connection string I am using or to the TLS handshake.

    UPDATE 3: According to Wireshark captures that I made with the OS 7.1 simulator, the secured TLS connection is being closed by the server (the client receives a FIN). For the moment I don't have the server's private key, therefore I am unable to debug the TLS handshake.

    Read the article

  • How to send a Java integer in four bytes to another application?

    - by user1468729
        public void routeMessage(byte[] data, int mode) {
            logger.debug(mode);
            logger.debug(Integer.toBinaryString(mode));

            byte[] message = new byte[8];
            ByteBuffer byteBuffer = ByteBuffer.allocate(4);
            ByteArrayOutputStream baoStream = new ByteArrayOutputStream();
            DataOutputStream doStream = new DataOutputStream(baoStream);
            try {
                doStream.writeInt(mode);
            } catch (IOException e) {
                logger.debug("Error converting mode from integer to bytes.", e);
                return;
            }
            byte[] bytes = baoStream.toByteArray();
            bytes[0] = (byte) ((mode >>> 24) & 0x000000ff);
            bytes[1] = (byte) ((mode >>> 16) & 0x000000ff);
            bytes[2] = (byte) ((mode >>> 8) & 0x000000ff);
            bytes[3] = (byte) (mode & 0x000000ff);
            //bytes = byteBuffer.array();
            for (byte b : bytes) {
                logger.debug(b);
            }
            for (int i = 0; i < 4; i++) {
                //byte tmp = (byte) (mode >> (32 - ((i + 1) * 8)));
                message[i] = bytes[i];
                logger.debug("mode, " + i + ": " + Integer.toBinaryString(message[i]));
                message[i + 4] = data[i];
            }
            broker.routeMessage(message);
        }

    I've tried different ways (as you can see from the commented code) to convert the mode to four bytes and send it via a socket to another application. It works well with integers up to 127 and then again with integers over 256. I believe it has something to do with Java types being signed, but I can't seem to get it to work. Here are some examples of what the program prints:

        127
        1111111
        0
        0
        0
        127
        mode, 0: 0
        mode, 1: 0
        mode, 2: 0
        mode, 3: 1111111

        128
        10000000
        0
        0
        0
        -128
        mode, 0: 0
        mode, 1: 0
        mode, 2: 0
        mode, 3: 11111111111111111111111110000000

        211
        11010011
        0
        0
        0
        -45
        mode, 0: 0
        mode, 1: 0
        mode, 2: 0
        mode, 3: 11111111111111111111111111010011

        306
        100110010
        0
        0
        1
        50
        mode, 0: 0
        mode, 1: 0
        mode, 2: 1
        mode, 3: 110010

    How is it suddenly possible for a byte to store 32 bits? How could I fix this?
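    A cross-check of those logs with Python's struct module suggests the packing itself is fine: the four bytes produced for 128 really are 00 00 00 80. The "-128" and the 32-bit-wide binary string come from Java sign-extending the signed byte when it is widened to an int for printing, so masking first (Integer.toBinaryString(message[i] & 0xFF)) displays what was actually stored.

        import struct

        for mode in (127, 128, 211, 306):
            packed = struct.pack(">i", mode)      # big-endian 32-bit int
            print(mode, list(packed))             # e.g. 306 -> [0, 0, 1, 50]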

    Read the article

  • What are GC holes?

    - by tianyi
    I wrote a long-lived TCP connection socket server in C#. A memory spike happens in my server. I used .NET Memory Profiler (a tool) to detect where the memory leaks. Memory Profiler indicates the private heap is huge, and the memory layout is something like the following (the numbers are not real; what I want to show is that the GC0 and GC2 "Holes" are very, very huge while the data size is normal):

        Managed heaps - 1,500,000KB
          Normal heap - 1,400,000KB
            Generation #0 - 600,000KB
              Data - 100,000KB
              "Holes" - 500,000KB
            Generation #1 - xxKB
              Data - 0KB
              "Holes" - xKB
            Generation #2 - xxxxxxxxxxxxxKB
              Data - 100,000KB
              "Holes" - 700,000KB
          Large heap - 131072KB
            Large heap - 83KB
            Overhead/unused - 130989KB
              Overhead - 0KB

    However, what is a GC hole? I read an article about it: http://kaushalp.blogspot.com/2007/04/what-is-gc-hole-and-how-to-create-gc.html

    The author said: "The code snippet below is the simplest way to introduce a GC hole into the system."

        //OBJECTREF is a typedef for Object*.
        {
            PointerTable *pTBL = o_pObjectClass->GetPointerTable();

            OBJECTREF aObj = AllocateObjectMemory(pTBL);
            OBJECTREF bObj = AllocateObjectMemory(pTBL);

            //WRONG!!! "aObj" may point to garbage if the second
            //"AllocateObjectMemory" triggered a GC.
            DoSomething(aObj, bObj);
        }

    All it does is allocate two managed objects and then do something with them both. This code compiles fine, and if you run simple pre-checkin tests, it will probably "work." But this code will crash eventually. Why? If the second call to "AllocateObjectMemory" triggers a GC, that GC discards the object instance you just assigned to "aObj". This code, like all C++ code inside the CLR, is compiled by a non-managed compiler, and the GC cannot know that "aObj" holds a root reference to an object you want kept live.

    I can't understand what he explained. Does the sample mean aObj becomes a wild pointer after the GC? Does it mean something like the following?

        {
            aObj = (*aObj)malloc(sizeof(object));
            free(aObj);
            function(aObj);
        }

    I hope somebody can explain it.

    Read the article

  • Connection between Properties of Entities in Data Oriented Design

    - by sharethis
    I want to start with an example illustrating my question. The following is the way it is done in most games:

        class car
        {
            vec3 position;
            vec3 rotation;
            mesh model;
            imge texture;

            void move(); // modify position and rotation
            void draw(); // use model, texture, ...
        };

        vector<car> cars;

        for (auto i = cars.begin(); i != cars.end(); ++i)
        {
            i->move();
            i->draw();
        }

    Data-oriented design means processing the same calculation on the whole batch of data at once. This way it takes more advantage of the processor cache:

        struct movedata
        {
            vec3 position;
            vec3 rotation;
        };

        struct drawdata
        {
            mesh model;
            imge texture;
        };

        vector<movedata> movedatas;
        vector<drawdata> drawdatas;

        for (auto i = movedatas.begin(); i != movedatas.end(); ++i)
        {
            // modify position and rotation
        }
        for (auto i = drawdatas.begin(); i != drawdatas.end(); ++i)
        {
            // use model, texture, ...
        }

    But there comes a point where you need to find the other properties belonging to an entity. For example, if the car crashes, I no longer need its drawdata or its movedata, so I need to delete that entity's entries in all the vectors; yet the entries are not linked in code. So my question is the following: how are properties of the same entity conceptually linked in a data-oriented design?
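    One common answer, sketched here as an illustration rather than a prescription: give each entity a stable ID, keep the component arrays dense and parallel, and maintain an index table between the two; removal swaps the last slot into the gap, so the arrays stay contiguous for the batched loops above. In Python for compactness:

        positions, models, entity_of_slot = [], [], []
        slot_of_entity = {}

        def add_entity(eid, pos, model):
            slot_of_entity[eid] = len(positions)
            positions.append(pos)
            models.append(model)
            entity_of_slot.append(eid)

        def remove_entity(eid):                  # e.g. the car crashed
            slot = slot_of_entity.pop(eid)
            last = len(positions) - 1
            for arr in (positions, models, entity_of_slot):
                arr[slot] = arr[last]            # move the last slot into the gap
                arr.pop()
            if slot < last:                      # re-point the entity that moved
                slot_of_entity[entity_of_slot[slot]] = slot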

    Read the article

  • C++ Vector vs Array (Time)

    - by vsha041
    I have two programs here, both doing exactly the same task: they set every element of a boolean array / vector to true. The program using a vector takes 27 seconds to run, whereas the program involving an array of 5 times greater size takes less than 1 second. I would like to know the exact reason why there is such a major difference. Are vectors really that inefficient?

    Program using vectors:

        #include <iostream>
        #include <vector>
        #include <ctime>
        using namespace std;

        int main() {
            const int size = 2000;
            time_t start, end;
            time(&start);
            vector<bool> v(size);
            for (int i = 0; i < size; i++) {
                for (int j = 0; j < size; j++) {
                    v[i] = true;
                }
            }
            time(&end);
            cout << difftime(end, start) << " seconds." << endl;
        }

    Runtime - 27 seconds

    Program using an array:

        #include <iostream>
        #include <ctime>
        using namespace std;

        int main() {
            const int size = 10000; // 5 times more size
            time_t start, end;
            time(&start);
            bool v[size];
            for (int i = 0; i < size; i++) {
                for (int j = 0; j < size; j++) {
                    v[i] = true;
                }
            }
            time(&end);
            cout << difftime(end, start) << " seconds." << endl;
        }

    Runtime - < 1 second

    Platform: Visual Studio 2008; OS: Windows Vista 32-bit SP1; Processor: Intel(R) Pentium(R) Dual CPU T2370 @ 1.73GHz; Memory (RAM): 1.00 GB. Thanks, Amare

    Read the article

< Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >