Search Results

Search found 177198 results on 7088 pages for 'not programming'.


  • Spikes in Socket Performance

    - by Harun Prasad
    We are facing random spikes in a high-throughput transaction processing system that uses sockets for IPC. The setup used for the run: the client opens and closes a new connection for every transaction, and there are 4 exchanges between the server and the client. We have disabled TIME_WAIT by setting the socket linger (SO_LINGER) option via setsockopt, as we thought the spikes were caused by sockets waiting in TIME_WAIT. There is no processing done for the transaction; only messages are passed. The OS is CentOS 5.4. The average round trip time is around 3 milliseconds, but sometimes the round trip time ranges from 100 milliseconds to a couple of seconds.

    Steps used for execution and measurement:

    Start the server:

        $ python sockServerLinger.py > /dev/null &

    Start the client to post 1 million transactions to the server, logging the time for each transaction to client.log:

        $ python sockClient.py 1000000 > client.log

    Once the execution finishes, the following command shows the execution times greater than 100 milliseconds, in the format <line_number>:<execution_time>:

        $ grep -n "0.[1-9]" client.log | less

    Below is the example code for the server and the client.

    Server (sockServerLinger.py):

        import socket, traceback, time
        import struct

        host = ''
        port = 9999
        l_onoff = 1
        l_linger = 0
        lingeropt = struct.pack('ii', l_onoff, l_linger)

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, lingeropt)
        s.bind((host, port))
        s.listen(1)

        while 1:
            try:
                clientsock, clientaddr = s.accept()
                print "Got connection from", clientsock.getpeername()
                data = clientsock.recv(1024*1024*10)
                numsent = clientsock.send(data)
                data1 = clientsock.recv(1024*1024*10)
                numsent = clientsock.send(data)
                ret = 1
                while ret > 0:
                    data1 = clientsock.recv(1024*1024*10)
                    ret = len(data1)  # was len(data), which never reaches 0
                clientsock.close()
            except KeyboardInterrupt:
                raise
            except:
                traceback.print_exc()
                continue

    Client (sockClient.py):

        import socket, traceback, sys
        import time

        i = 0
        while 1:
            try:
                st = time.time()
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                while s.connect_ex(('127.0.0.1', 9999)) != 0:
                    continue
                numsent = s.send("asd"*1000)
                response = s.recv(6000)
                numsent = s.send("asd"*1000)
                response = s.recv(6000)
                i += 1
                if i == int(sys.argv[1]):
                    break
            except KeyboardInterrupt:
                raise
            except:
                print "in exec:::::::::::::"
                traceback.print_exc()
                continue
            print time.time() - st
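    A note on where such spikes often come from (a hypothesis to test, not a diagnosis of this setup): Nagle's algorithm interacting with delayed ACKs can stall small request-response exchanges by tens to hundreds of milliseconds, which matches the reported range. A minimal sketch of disabling it on a socket, alongside the SO_LINGER setting already used above:

        import socket
        import struct

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Disable Nagle's algorithm so small writes go out immediately instead
        # of waiting for an ACK of prior data (an assumed cause, worth testing).
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        # SO_LINGER as in the code above; note this is set with setsockopt,
        # getsockopt only reads the option back.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))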


  • Setting the primary key of an object type table in Oracle

    - by Rek
    I have two Oracle questions.

    First, how can I set the primary key of a table when the table is made up of an object type? e.g.

        CREATE TABLE object_names OF object_type

    Second, I have created a Varray type:

        CREATE TYPE MULTI_TAG AS VARRAY(10) OF VARCHAR(10);

    but when I try to do

        SELECT p.tags.count FROM pg_photos p;

    I get an "invalid identifier" error on the "count" part. p.tags is a MULTI_TAG; how can I get the number of elements in the MULTI_TAG?


  • How to find the cause of a SocketException saying an established connection was aborted

    - by cdpnet
    Hi all, I know similar questions may have been asked many times, but I want to describe the behavior I'm seeing and find out if somebody can help predict the cause of it. I am writing a Windows service which connects to another Windows service over TCP. There are 100 user entities, with 5 connections each, and these users perform their tasks using their individual connections. The application runs without seeing this problem for 1 or 2 days, or sometimes shows the problem right after starting (rarely). The best run I had was 4 to 5 days without this exception, and after that the application died or I had to stop it for various reasons. I want to know what can be causing this. Here is the stack trace:

        System.IO.IOException: Unable to write data to the transport connection: An established connection was aborted by the software in your host machine. ---> System.Net.Sockets.SocketException: An established connection was aborted by the software in your host machine
           at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
           at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           --- End of inner exception stack trace ---
           at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.Security._SslStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security._SslStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslStream.Write(Byte[] buffer, Int32 offset, Int32 count)
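    For context: "An established connection was aborted by the software in your host machine" is WSAECONNABORTED (10053), which generally means the local TCP stack gave up on the connection, e.g. after send timeouts, retransmission failures, or an idle connection being dropped by a firewall or NAT along the path. For long-lived connections, enabling TCP keepalives is one commonly tried mitigation; a minimal sketch in Python (illustrative only, since the code above is .NET, where the same option is set via Socket.SetSocketOption):

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Send keepalive probes on an otherwise idle connection so that dead
        # paths are detected and middleboxes see periodic traffic.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)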


  • SIMPLE OpenSSL RSA Encryption in C/C++ is causing me headaches

    - by Josh
    Hey guys, I'm having some trouble figuring out how to do this. Basically I just want a client and server to be able to send each other encrypted messages. This is going to be incredibly insecure, because I'm trying to figure this all out, so I might as well start at the ground floor. So far I've got all the keys working, but encryption/decryption is giving me hell. I'll start by saying I am using C++, but most of these functions require C strings, so whatever I'm doing may be causing problems. Note that on the client side I receive the following error in regards to decryption:

        error:04065072:rsa routines:RSA_EAY_PRIVATE_DECRYPT:padding check failed

    I don't really understand how padding works, so I don't know how to fix it. Anyhow, here are the relevant variables on each side, followed by the code.

    Client:

        RSA *myKey; // Loaded with private key
        // The below will hold the decrypted message
        unsigned char* decrypted = (unsigned char*) malloc(RSA_size(myKey));
        /* The below holds the encrypted string received over the network.
           Originally held in a C string, but C strings never work for me
           and scare me, so I put it in a C++ string. */
        string encrypted;

        // The reinterpret_cast line was to get rid of an error message.
        // Maybe the cause of one of my problems?
        if (RSA_private_decrypt(sizeof(encrypted.c_str()),
                reinterpret_cast<const unsigned char*>(encrypted.c_str()),
                decrypted, myKey, RSA_PKCS1_OAEP_PADDING) == -1)
        {
            cout << "Private decryption failed" << endl;
            ERR_error_string(ERR_peek_last_error(), errBuf);
            printf("Error: %s\n", errBuf);
            free(decrypted);
            exit(1);
        }

    Server:

        RSA *pkey;  // Holds the client's public key
        string key; // Holds a session key I want to encrypt and send
        // The below will hold the encrypted message
        unsigned char *encrypted = (unsigned char*) malloc(RSA_size(pkey));

        // The reinterpret_cast line was to get rid of an error message.
        // Maybe the cause of one of my problems?
        if (RSA_public_encrypt(sizeof(key.c_str()),
                reinterpret_cast<const unsigned char*>(key.c_str()),
                encrypted, pkey, RSA_PKCS1_OAEP_PADDING) == -1)
        {
            cout << "Public encryption failed" << endl;
            ERR_error_string(ERR_peek_last_error(), errBuf);
            printf("Error: %s\n", errBuf);
            free(encrypted);
            exit(1);
        }

    Let me once again state, in case I didn't before, that I know my code sucks, but I'm just trying to establish a framework for understanding this. I'm sorry if this offends you veteran coders. Thanks in advance for any help you guys can provide!
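    One concrete thing stands out in the code above: sizeof(encrypted.c_str()) and sizeof(key.c_str()) evaluate to the size of a char pointer (typically 4 or 8), not the length of the data, so RSA_public_encrypt encrypts only a few bytes and RSA_private_decrypt is handed a truncated ciphertext, which is consistent with the padding check failure. The lengths should come from key.size() and from the actual ciphertext length (RSA_size(myKey) bytes for a full block). The same round trip sketched in Python's cryptography package, purely as a reference for the shape of the calls:

        from cryptography.hazmat.primitives.asymmetric import rsa, padding
        from cryptography.hazmat.primitives import hashes

        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)

        # Encrypt with the public key, decrypt with the private key; both calls
        # take the real byte strings, so lengths are never confused with pointers.
        ciphertext = private_key.public_key().encrypt(b"session key", oaep)
        assert private_key.decrypt(ciphertext, oaep) == b"session key"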


  • Best Cross Platform Networking Framework For iPhone Dev

    - by Carmen
    I am looking to write a client/server application that will run on iPhone, OS X, and Windows. What are the best solutions for networking that will work on all 3 platforms? I have looked into Qt, and it doesn't look like it has support for iPhone. I have also looked at Boost, and it looks like it can be compiled for the iPhone. However, I was hoping for a somewhat higher-level framework, like what Qt has to offer.


  • Is CSS Turing complete?

    - by Adam Davis
    CSS isn't, insofar as I know, Turing complete. But my knowledge of CSS is very limited. Is CSS Turing complete? Are any of the existing drafts or committees considering language features that might enable Turing completeness, if it isn't already?


  • Visual Studio T4 vs CodeSmith

    - by Jake
    I've been using CodeSmith for the past 2 years and love what it does for me. However, I also know about T4, which is built into Visual Studio and can do some pretty cool stuff too. Based on conversations with friends, T4 in VS2010 is going to be even better. So the question is: do I keep riding the CodeSmith bus, or is it time to start converting all of my templates to T4?


  • Question about pcap

    - by scatman
    Hi, I have to write a sniffer as an assignment for the security course. I am using C and the pcap library. I have everything working well (since I got code from the internet and changed it), but I have some questions about the code:

        u_int ip_len = (ih->ver_ihl & 0xf) * 4;

    ih is of type ip_header and currently points to the IP header in the packet; ver_ihl gives the version of the IP. I can't figure out what the "& 0xf) * 4" part does. Any help?
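    For reference, ver_ihl is one byte holding two 4-bit fields, not just the version; a small sketch of what the expression computes:

        # The first byte of an IPv4 header packs two 4-bit fields: the version
        # in the high nibble and IHL, the header length counted in 32-bit
        # words, in the low nibble.
        ver_ihl = 0x45                    # a typical first byte: version 4, IHL 5
        version = ver_ihl >> 4            # 4
        ihl_words = ver_ihl & 0xf         # 5 -- "& 0xf" keeps only the low nibble
        header_len = ihl_words * 4        # 20 -- "* 4" converts words to bytes
        print(version, ihl_words, header_len)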


  • Create an NTP time stamp from gettimeofday

    - by krunk
    I need to calculate an NTP time stamp using gettimeofday. Below is how I've done it, with comments on the method. Look good to you guys? (Minus error checking.) Also, here's a codepad link.

        #include <unistd.h>
        #include <stdint.h>    /* added: needed for uint64_t / uint32_t */
        #include <sys/time.h>

        const unsigned long EPOCH = 2208988800UL;     // delta between epoch time and ntp time
        const double NTP_SCALE_FRAC = 4294967295.0;   // maximum value of the ntp fractional part

        int main()
        {
            struct timeval tv;
            uint64_t ntp_time;
            uint64_t tv_ntp;
            double tv_usecs;

            gettimeofday(&tv, NULL);
            tv_ntp = tv.tv_sec + EPOCH;

            // convert tv_usec to a fraction of a second
            // next, we multiply this fraction times the NTP_SCALE_FRAC, which represents
            // the maximum value of the fraction until it rolls over to one. Thus,
            // .05 seconds is represented in NTP as (.05 * NTP_SCALE_FRAC)
            tv_usecs = (tv.tv_usec * 1e-6) * NTP_SCALE_FRAC;

            // next we take the tv_ntp seconds value and shift it 32 bits to the left. This puts
            // the seconds in the proper location for NTP time stamps. I recognize this method
            // has an overflow hazard if used after around the year 2106.
            // Next we do a bitwise AND with the tv_usecs cast as a uint32_t, dropping the
            // fractional part
            ntp_time = ((tv_ntp << 32) & (uint32_t)tv_usecs);
        }
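    One reading of the last line (offered as a review comment, since the question asks): the two halves occupy disjoint bit ranges, so combining them with a bitwise AND yields zero; the intended operator is presumably a bitwise OR. The same construction sketched in Python:

        import time

        EPOCH = 2208988800              # seconds from 1900-01-01 to 1970-01-01
        NTP_SCALE_FRAC = 4294967295.0   # 2**32 - 1, max fractional value

        t = time.time()
        seconds = int(t) + EPOCH
        fraction = int((t - int(t)) * NTP_SCALE_FRAC)
        # OR merges the two halves; the high and low 32 bits never overlap.
        ntp_time = (seconds << 32) | fraction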


  • Scalaz: request for use case

    - by oxbow_lakes
    This question isn't meant as flame-bait! As might be apparent, I've been looking at Scalaz recently. I'm trying to understand why I need some of the functionality that the library provides. Here's something:

        import scalaz._
        import Scalaz._
        type NEL[A] = NonEmptyList[A]
        val NEL = NonEmptyList

    I put some println statements in my functions to see what was going on (aside: what would I have done if I was trying to avoid side effects like that?). My functions are:

        val f: NEL[Int] => String =
          (l: NEL[Int]) => { println("f: " + l); l.toString |+| "X" }

        val g: NEL[String] => BigInt =
          (l: NEL[String]) => { println("g: " + l); BigInt(l.map(_.length).sum) }

    Then I combine them via a cokleisli and pass in a NEL[Int]:

        val k = cokleisli(f) =>= cokleisli(g)
        println("RES: " + k( NEL(1, 2, 3) ))

    What does this print?

        f: NonEmptyList(1, 2, 3)
        f: NonEmptyList(2, 3)
        f: NonEmptyList(3)
        g: NonEmptyList(NonEmptyList(1, 2, 3)X, NonEmptyList(2, 3)X, NonEmptyList(3)X)
        RES: 57

    The RES value is the character count of the (String) elements in the final NEL. Two things occur to me:

    1. How could I have known that my NEL was going to be reduced in this manner from the method signatures involved? (I wasn't expecting the result at all.)
    2. What is the point of this? Can a reasonably simple and easy-to-follow use case be distilled for me?

    This question is a thinly-veiled plea for some lovely person like retronym to explain how this powerful library actually works.
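    As a rough intuition (an informal sketch, not Scalaz's actual implementation): =>= composes the two functions through the comonad's cobind, and cobind on a NonEmptyList applies its function to each successive tail, which is exactly the trace of f above. Transliterated to Python lists:

        # cojoin lists the successive tails of the input; cobind maps f over
        # them, which is why f above runs on (1,2,3), (2,3) and (3).
        def cojoin(xs):
            return [xs[i:] for i in range(len(xs))]

        def cobind(f, xs):
            return [f(tail) for tail in cojoin(xs)]

        f = lambda xs: str(xs) + "X"
        g = lambda strings: sum(len(s) for s in strings)

        # Composing through cobind, like cokleisli(f) =>= cokleisli(g):
        print(g(cobind(f, [1, 2, 3])))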


  • Generic Data Structure Description Language

    - by Jon Purdy
    I am wondering whether there exists any declarative language for arbitrarily describing the format and semantics of a data structure, that can be compiled to a specific implementation of that structure in any of a set of target languages. That is, something like a generic data definition language but geared toward describing arbitrary data structures such as vectors, lists, trees, etc., and the semantics of operations on those structures. I ask because I had an idea for a feasible implementation of this concept, and I'm just wondering whether it's worth it, and, consequently, whether it's been done before. Another, slightly more abstract question: is there any real difference between the normative specification of a data structure (what it does) and its implementation (how it does it)?


  • How to delete columns?

    - by Joey jie
    Hi all, I have generated the following text file:

        fe120b99164f151b28bf86afa6389b22 -rw-r--r-- 1 joey joey 186 2010-03-14 19:26 Descript.txt
        41705ea936cfc653f273b5454c1cdde6 -rw-r--r-- 1 joey joey 30 2010-03-14 20:29 listof.txt
        0e25cca3222d32fff43563465af03340 -rw-r--r-- 1 joey joey 28 2010-03-14 23:35 sedexample.txt
        d41d8cd98f00b204e9800998ecf8427e -rw-r--r-- 1 joey joey 0 2010-02-16 15:11 test1.txt
        d41d8cd98f00b204e9800998ecf8427e -rw-r--r-- 1 joey joey 0 2010-02-16 15:11 test2.txt
        d41d8cd98f00b204e9800998ecf8427e -rw-r--r-- 1 joey joey 0 2010-02-16 15:11 test3.txt
        d41d8cd98f00b204e9800998ecf8427e -rw-r--r-- 1 joey joey 0 2010-02-16 15:11 test4.txt
        d41d8cd98f00b204e9800998ecf8427e -rw-r--r-- 1 joey joey 0 2010-02-16 15:11 test5.txt
        d41d8cd98f00b204e9800998ecf8427e -rw-r--r-- 1 joey joey 0 2010-02-16 15:11 test6.txt
        f5c7f1856249d0526be10df5bd5b895a -rw-r--r-- 1 joey joey 26 2010-03-13 14:13 testingfile.txt
        d41d8cd98f00b204e9800998ecf8427e -rw-r--r-- 1 joey joey 0 2010-03-15 00:28 uniquelist.txt

    Basically I would like to get rid of the access/amount (link count), user, and group columns. In other words, I want to get rid of columns 3, 4 and 5. I have tried using cut to keep the columns that I do want, with " " as my delimiter; however, because of the file size figures, it messes up using "space" as a delimiter. Any advice would be much appreciated! Oh, just to add, I would like to save the output as another text file. Many thanks!
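    Since the column widths vary, splitting on runs of whitespace works where cut's single-character delimiter does not (in a shell, awk '{print $1, $2, $6, $7, $8, $9}' does the same). A small sketch in Python, with assumed file names:

        # Split on runs of whitespace (which cut's fixed single-space delimiter
        # cannot do), drop fields 3-5 (link count, user, group), keep the rest.
        # "listing.txt" and "trimmed.txt" are assumed names; file names in the
        # listing are assumed to contain no spaces.
        with open("listing.txt") as src, open("trimmed.txt", "w") as dst:
            for line in src:
                fields = line.split()
                dst.write(" ".join(fields[:2] + fields[5:]) + "\n")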


  • How to send raw data over a network?

    - by youllknow
    Hi everyone! I have some data stored in a byte array. The data contains an IPv4 packet (which contains a UDP packet). I want to send this array raw over the network using C# (preferred) or C++. I don't want to use C#'s UdpClient, for example. Does anyone know how to do this? Sorry for my bad English, and thanks for your help in advance!
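    For illustration, here is the idea in Python rather than C#: a raw socket with IP_HDRINCL tells the stack that the buffer already contains the IP header. The C# analogue is a Socket created with SocketType.Raw and the HeaderIncluded option; note that raw sockets require administrator/root privileges, and desktop Windows (since XP SP2) restricts some raw sends.

        import socket

        # packet_bytes stands in for the byte array from the question; it must
        # already contain a complete IPv4 header plus the UDP payload.
        packet_bytes = b"..."
        dst = "192.0.2.1"   # placeholder destination

        s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
        # Tell the stack the buffer already includes the IP header.
        s.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)
        s.sendto(packet_bytes, (dst, 0))   # the port is ignored for raw sends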


  • ANSI C fscanf problem

    - by mongoose
    Hi, I read the file as follows:

        fscanf(fp, "%f %f %f", &*(p1+i), &*(p2+i), &*(p3+i));

    Each line of my file consists of three floating-point numbers. The problem I have is this: in the file, let's say I have some floating-point numbers with, say, a maximum of two digits after the dot. But when I ask C to print those values using different formatting, for example %lf, %.2lf, %.4lf... it starts to play with the digits. My only concern is this: if I have, let's say, 1343.23 in the file, will C use this value exactly as it is in computations, or will it play with the digits after the dot? If it plays with them, how is it possible to make it use the floating-point numbers exactly as they are? For example, in the last case, even if I ask it to print the value using %.10lf, I would expect it to print only 1343.2300000000. Thanks a lot!
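    What is happening here is binary floating-point representation rather than fscanf altering digits: most decimal fractions, 1343.23 included, have no exact binary representation, so the nearest representable value is stored and then used consistently in computations; printf merely rounds that stored value to the requested number of digits (and reading into double with %lf keeps more digits than float, but neither stores 1343.23 exactly). A quick way to see the stored value, sketched in Python (whose floats are the same IEEE 754 doubles):

        from decimal import Decimal

        x = 1343.23
        print("%.2f" % x)   # 1343.23 -- printf-style rounding for display
        print(Decimal(x))   # the exact binary value actually stored, slightly
                            # different from 1343.23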


  • When is calculating or variable-reading faster?

    - by Andreas Hornig
    Hi, to be honest, I don't really know what the "small green men" in my CPU and compiler do, so I sometimes would like to know :). Currently I would like to know what's faster, so that I can design my code in a more efficient way. For example, say I want to calculate something at different points in my source code: when will it be faster to calculate it once and store it in a variable that's read and used at the next points it's needed, and when is it faster to calculate it every time? I think it depends on how "complex" and "long" the calculation is, and on how fast the cache where variables are stored is, but I don't have any clue what's faster :). Thanks for any reply to my tiny but important question! Andreas. PS: Perhaps it's important to know that I code in Java, but it's more a general question.
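    The usual answer is to measure, since for cheap expressions the JIT may hoist or fold the computation anyway, while for expensive ones the variable wins. A minimal sketch of such a comparison, using Python's timeit for brevity (the question is about Java, where a harness like JMH plays this role; the sketch shows the method, not Java's actual costs):

        import timeit

        # Recompute the expression at every use...
        recompute = timeit.timeit("math.sqrt(12345.6789) + math.sqrt(12345.6789)",
                                  setup="import math", number=1000000)
        # ...versus compute once, then read a variable.
        cached = timeit.timeit("r + r",
                               setup="import math; r = math.sqrt(12345.6789)",
                               number=1000000)
        print(recompute, cached)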

