Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • .NET Sockets Buffer Overflow No Error

    - by Michael Covelli
    I have one thread that is receiving data over a socket like this:

        while (sock.Connected)
        {
            // Receive data (block if no data)
            recvn = sock.Receive(recvb, 0, rlen, SocketFlags.None, out serr);
            if (recvn <= 0 || sock == null || !sock.Connected)
            {
                OnError("Error In Receive, recvn <= 0 || sock == null || !sock.Connected");
                return;
            }
            else if (serr != SocketError.Success)
            {
                OnError("Error In Receive, serr = " + serr);
                return;
            }

            // Copy data into tokenizer
            tknz.Read(recvb, recvn);

            // Parse data
            while (tknz.MoveToNext())
            {
                try
                {
                    ParseMessageAndRaiseEvents(tknz.Buffer(), tknz.Length);
                }
                catch (System.Exception ex)
                {
                    string BadMessage = ByteArrayToStringClean(tknz.Buffer(), tknz.Length);
                    string msg = string.Format(
                        "Exception in MDWrapper Parsing Message, Ex = {0}, Msg = {1}",
                        ex.Message, BadMessage);
                    OnError(msg);
                }
            }
        }

    And I kept seeing occasional errors in my parsing function indicating that the message wasn't valid. At first, I thought that my tokenizer class was broken. But after logging all the incoming bytes to the tokenizer, it turns out that the raw bytes in recvb weren't a valid message. I didn't think that corrupted data like this was possible with a TCP data stream. I figured it had to be some type of buffer overflow, so I set sock.ReceiveBufferSize = 1024 * 1024 * 8; and the parsing error never occurs in testing (it happens often enough to replicate if I don't change the ReceiveBufferSize). But my question is: why wasn't I seeing an exception or an error state or something if the socket's internal buffer was overflowing before I changed this buffer size?
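
    Note: TCP delivers an ordered, checksummed byte stream, and a full receive buffer shrinks the advertised window rather than silently dropping bytes, so "corruption" at this layer almost always means the reader consumed more bytes than Receive() actually returned, or reused the buffer across reads. Below is a minimal sketch of a receive loop that hands over exactly the bytes from the current read; the tokenizer interface is a placeholder standing in for the poster's tknz object, not a real API:

        using System.Net.Sockets;

        interface ITokenizer
        {
            void Read(byte[] data, int count);  // accumulate partial messages
            bool MoveToNext();                  // true while a complete message is buffered
            byte[] Buffer();
            int Length { get; }
        }

        static class Receiver
        {
            static void ReceiveLoop(Socket sock, ITokenizer tokenizer)
            {
                byte[] buffer = new byte[8192];
                while (sock.Connected)
                {
                    int received = sock.Receive(buffer, 0, buffer.Length, SocketFlags.None);
                    if (received <= 0)
                        return; // 0 means the peer closed the connection gracefully

                    // Hand over exactly 'received' bytes -- never rlen or buffer.Length --
                    // and let the tokenizer reassemble messages that span reads.
                    tokenizer.Read(buffer, received);
                    while (tokenizer.MoveToNext())
                        Parse(tokenizer.Buffer(), tokenizer.Length);
                }
            }

            static void Parse(byte[] message, int length) { /* application-specific */ }
        }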

  • Perl cron job stays running

    - by Dylan
    I'm currently using a cron job to run a Perl script that tells my Arduino to cycle my aquaponics system, and all is well except that the Perl script doesn't die as intended. Here is my cron job:

        */15 * * * * /home/dburke/scripts/hal/bin/main.pl cycle

    And below is my Perl script:

        #!/usr/bin/perl -w
        # Sample Perl script to transmit a number to the Arduino,
        # then listen for the Arduino to echo it back

        use strict;
        use Device::SerialPort;
        use Switch;
        use Time::HiRes qw ( alarm );

        $|++;

        # Set up the serial port: 19200, 8N1 on the USB ftdi driver
        my $device = '/dev/arduino0';
        # Tomoc has to use a different tty for testing
        #$device = '/dev/ttyS0';

        my $port = new Device::SerialPort ($device)
            or die('Unable to open connection to device');
        $port->databits(8);
        $port->baudrate(19200);
        $port->parity("none");
        $port->stopbits(1);

        my $lastChoice = ' ';
        my $pid = fork();
        my $signalOut;
        my $args = shift(@ARGV);

        # Parent must wait for child to exit before exiting itself on CTRL+C
        $SIG{'INT'} = sub {
            waitpid($pid, 0) if $pid != 0;
            exit(0);
        };

        # What the child process should do
        if ($pid == 0) {
            # Poll to see if any data is coming in
            print "\nListening...\n\n";
            while (1) {
                my $incmsg = $port->lookfor(9);
                # If we get data, then print it
                if ($incmsg) {
                    print "\nFrom arduino: " . $incmsg . "\n\n";
                }
            }
        }
        # What the parent process should do
        else {
            if ($args eq "cycle") {
                my $stop = 0;
                sleep(1);
                $SIG{ALRM} = sub {
                    print "Expecting plant bed to be full; please check.\n";
                    $signalOut = $port->write('2');   # Signal to set pin 3 low
                    print "Sent cmd: 2\n";
                    $stop = 1;
                };
                $signalOut = $port->write('1');       # Signal to Arduino to set pin 3 high
                print "Sent cmd: 1\n";
                print "Waiting for plant bed to fill...\n";
                alarm (420);
                while ($stop == 0) {
                    sleep(2);
                }
                die "Done.";
            }
            else {
                sleep(1);
                my $choice = ' ';
                print "Please pick an option you'd like to use:\n";
                while (1) {
                    print " [1] Cycle [2] Relay OFF [3] Relay ON [4] Config [$lastChoice]: ";
                    chomp($choice = <STDIN>);
                    switch ($choice) {
                        case /1/ {
                            $SIG{ALRM} = sub {
                                print "Expecting plant bed to be full; please check.\n";
                                $signalOut = $port->write('2');  # Signal to set pin 3 low
                                print "Sent cmd: 2\n";
                            };
                            $signalOut = $port->write('1');      # Signal to Arduino to set pin 3 high
                            print "Sent cmd: 1\n";
                            print "Waiting for plant bed to fill...\n";
                            alarm (420);
                            $lastChoice = $choice;
                        }
                        case /2/ {
                            $signalOut = $port->write('2');      # Signal to set pin 3 low
                            print "Sent cmd: 2";
                            $lastChoice = $choice;
                        }
                        case /3/ {
                            $signalOut = $port->write('1');      # Signal to Arduino to set pin 3 high
                            print "Sent cmd: 1";
                            $lastChoice = $choice;
                        }
                        case /4/ {
                            print "There is no configuration available yet. Please stab the developer.";
                        }
                        else {
                            print "Please select a valid option.\n\n";
                        }
                    }
                }
            }
        }

    Why wouldn't it die from the statement die "Done.";? It runs fine from the command line and interprets the 'cycle' argument fine. When it runs from cron it also runs fine; however, the process never dies, and while each process doesn't continue to cycle the system, it does seem to be looping in some way, because it drives my system load up very quickly. If you'd like more information, just ask.
    EDIT: I have changed the code to:

        #!/usr/bin/perl -w
        # Sample Perl script to transmit a number to the Arduino,
        # then listen for the Arduino to echo it back

        use strict;
        use Device::SerialPort;
        use Switch;
        use Time::HiRes qw ( alarm );

        $|++;

        # Set up the serial port: 19200, 8N1 on the USB ftdi driver
        my $device = '/dev/arduino0';
        # Tomoc has to use a different tty for testing
        #$device = '/dev/ttyS0';

        my $port = new Device::SerialPort ($device)
            or die('Unable to open connection to device');
        $port->databits(8);
        $port->baudrate(19200);
        $port->parity("none");
        $port->stopbits(1);

        my $lastChoice = ' ';
        my $signalOut;
        my $args = shift(@ARGV);

        if ($args eq "cycle") {
            open (LOG, '>>log.txt');
            print LOG "Cycle started.\n";
            my $stop = 0;
            sleep(2);
            $SIG{ALRM} = sub {
                print "Expecting plant bed to be full; please check.\n";
                $signalOut = $port->write('2');   # Signal to set pin 3 low
                print "Sent cmd: 2\n";
                $stop = 1;
            };
            $signalOut = $port->write('1');       # Signal to Arduino to set pin 3 high
            print "Sent cmd: 1\n";
            print "Waiting for plant bed to fill...\n";
            print LOG "Alarm is being set.\n";
            alarm (420);
            print LOG "Alarm is set.\n";
            while ($stop == 0) {
                print LOG "In while-sleep loop.\n";
                sleep(2);
            }
            print LOG "The loop has been escaped.\n";
            die "Done.";
            print LOG "No one should ever see this.";
        }
        else {
            my $pid = fork();

            # Parent must wait for child to exit before exiting itself on CTRL+C
            $SIG{'INT'} = sub {
                waitpid($pid, 0) if $pid != 0;
                exit(0);
            };

            # What the child process should do
            if ($pid == 0) {
                # Poll to see if any data is coming in
                print "\nListening...\n\n";
                while (1) {
                    my $incmsg = $port->lookfor(9);
                    # If we get data, then print it
                    if ($incmsg) {
                        print "\nFrom arduino: " . $incmsg . "\n\n";
                    }
                }
            }
            # What the parent process should do
            else {
                sleep(1);
                my $choice = ' ';
                print "Please pick an option you'd like to use:\n";
                while (1) {
                    print " [1] Cycle [2] Relay OFF [3] Relay ON [4] Config [$lastChoice]: ";
                    chomp($choice = <STDIN>);
                    switch ($choice) {
                        case /1/ {
                            $SIG{ALRM} = sub {
                                print "Expecting plant bed to be full; please check.\n";
                                $signalOut = $port->write('2');  # Signal to set pin 3 low
                                print "Sent cmd: 2\n";
                            };
                            $signalOut = $port->write('1');      # Signal to Arduino to set pin 3 high
                            print "Sent cmd: 1\n";
                            print "Waiting for plant bed to fill...\n";
                            alarm (420);
                            $lastChoice = $choice;
                        }
                        case /2/ {
                            $signalOut = $port->write('2');      # Signal to set pin 3 low
                            print "Sent cmd: 2";
                            $lastChoice = $choice;
                        }
                        case /3/ {
                            $signalOut = $port->write('1');      # Signal to Arduino to set pin 3 high
                            print "Sent cmd: 1";
                            $lastChoice = $choice;
                        }
                        case /4/ {
                            print "There is no configuration available yet. Please stab the developer.";
                        }
                        else {
                            print "Please select a valid option.\n\n";
                        }
                    }
                }
            }
        }

  • Why is Lua considered a game language?

    - by Hoffmann
    I have been learning Lua for the past month and I'm absolutely in love with the language, but almost everything I see built with Lua is games. I mean, the syntax is very simple, there is no fuss, no special-meaning characters that make code look like regex; it has all the good things about a scripting language and integrates painlessly with other languages like C, Java, etc. The only downside I've seen so far is the prototype-based object orientation that some people do not like (or the lack of built-in OO). I do not see how Ruby or Python are better, and surely not in performance ( http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=lua&lang2=python ). I was planning on writing a web app using Lua with the Kepler framework and JavaScript, but the lack of other projects that use Lua as a web language makes me feel a bit uneasy, since this is my first try at web development. Lua is considered a kids' language; most of you on Stack Overflow probably only know the language because of the WoW addons. I can't really see why that is... http://lua-users.org/wiki/LuaVersusPython provides some insights on Lua versus Python, but it is clearly biased.

  • GLFW - Not drawing square

    - by m00st
    I am using GLFW as the GUI for OpenGL projects. I am testing code from my Red Book, and the first bit of code doesn't work at all. I want to say this is a GLFW problem, because I don't have this problem in JOGL.

        #include <iostream>
        #include "GL/glfw.h"

        #ifndef MAIN
        #define MAIN
        #include "GL/gl.h"
        #include "GL/glu.h"
        #endif

        using namespace std;

        int main()
        {
            int running = GL_TRUE;
            glfwInit();
            if (!glfwOpenWindow(300, 300, 0, 0, 0, 0, 0, 0, GLFW_WINDOW)) {
                glfwTerminate();
                return 0;
            }
            while (running) {
                // GL code here
                glClear(GL_COLOR_BUFFER_BIT);
                glClearColor(0.0, 0.0, 0.0, 0.0);
                glColor3f(1.0, 1.0, 1.0);
                glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
                glBegin(GL_POLYGON);
                    glVertex3f(0.25, 0.25, 0.0);
                    glVertex3f(0.75, 0.25, 0.0);
                    glVertex3f(0.75, 0.75, 0.0);
                    glVertex3f(0.25, 0.75, 0.0);
                glEnd();
                glFlush();
                glfwSwapBuffers();
                // Check if ESC key was pressed or window was closed
                running = !glfwGetKey(GLFW_KEY_ESC) && glfwGetWindowParam(GLFW_OPENED);
            }
            glfwTerminate();
            return 0;
        }

  • randomize array using php

    - by Suneth Kalhara
    I need to randomise the order of the following array using PHP. I tried to use array shuffle and array_random but no luck; can anyone help me please?

        Array
        (
            [0] => Array ( [value] => 4  [label] => GasGas )
            [1] => Array ( [value] => 3  [label] => Airoh Helmets )
            [2] => Array ( [value] => 12 [label] => XCiting Trials Wear )
            [3] => Array ( [value] => 11 [label] => Hebo Trials )
            [4] => Array ( [value] => 10 [label] => Jitsie Products )
            [5] => Array ( [value] => 9  [label] => Diadora Boots )
            [6] => Array ( [value] => 8  [label] => S3 Performance )
            [7] => Array ( [value] => 7  [label] => Scorpa )
            [8] => Array ( [value] => 6  [label] => Inspired )
            [9] => Array ( [value] => 5  [label] => Oset )
        )

  • Very interesting problem in Compact Framework

    - by Alexander
    Hi, I have a performance problem while inserting data into SQL CE. I'm reading a string and making inserts into my tables. Into the LU_MAM table I insert 1000 records within 8 seconds. After the MAM tables I make some more inserts, but my largest table is CR_MUS. When I want to insert records into CR_MUS, it takes too much time: CR_MUS has 2000 records and the insert takes 35 seconds. What can be the reason? I use the same logic in all my insert functions. Do you have any idea? I use VS 2008 SP1. Here is the relevant code:

        Dim reader As StringReader
        reader = New StringReader(data)
        cn = New SqlCeConnection(General.ConnString)
        cn.Open()
        If myTransfer.ClearTables(cn, cmd) = True Then
            progress = 0
            cmd = New SqlServerCe.SqlCeCommand
            Dim rs As SqlCeResultSet
            cmd.Connection = cn
            cmd.CommandType = CommandType.TableDirect
            Dim rec As SqlCeUpdatableRecord

            While reader.Peek > -1
                If strerr_col = "" Then
                    satir = reader.ReadLine()
                    ayrac = Split(satir, "|")
                    If ayrac(0).ToString() = "LC" Then
                        prgsbar.Maximum = Convert.ToInt32(ayrac(1))
                    ElseIf ayrac(0).ToString = "PPAR" Then
                        If ayrac(2).ToString <> General.PMVer Then
                            ShowWaitCursor(False)
                            txtDurum.Text = "Wrong Version"
                            Exit Sub
                        End If
                        If p_POCKET_PARAMETERS = True Then
                            cmd.CommandText = "POCKET_PARAMETERS"
                            txtDurum.Text = "POCKET_PARAMETERS"
                            rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable)
                            rec = rs.CreateRecord()
                            p_POCKET_PARAMETERS = False
                        End If
                        strerr_col = myVERI_AL.POCKET_PARAMETERS_I(ayrac, cmd, rs, rec)
                        prgsbar.Value += 1
                    ElseIf ayrac(0).ToString() = "MAM" Then
                        If p_LU_MAM = True Then
                            txtDurum.Text = "LU_MAM"
                            cmd.CommandText = "LU_MAM"
                            rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable)
                            rec = rs.CreateRecord()
                            p_LU_MAM = False
                        End If
                        strerr_col = myVERI_AL.LU_MAM_I(ayrac, cmd, rs, rec)
                        prgsbar.Value += 1
                    ElseIf ayrac(0).ToString = "KMUS" Then
                        If p_CR_MUS = True Then
                            cmd.CommandText = "CR_MUS"
                            txtDurum.Text = "CR_MUS"
                            rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable)
                            rec = rs.CreateRecord()
                            p_TR_KAMPANYA_MALZEME = False
                        End If
                        strerr_col = myVERI_AL.CR_MUS_I(ayrac, cmd, rs, rec)
                        prgsbar.Value += 1
                    End If
                End If
            End While
        End If

    And the insert function:

        Public Function CR_KAMPANYA_MUSTERI_I(ByVal f_Line() As String, ByRef myComm As SqlCeCommand, ByRef rs As SqlCeResultSet, ByRef rec As SqlCeUpdatableRecord) As String
            Try
                rec.SetValue(0, If(f_Line(1) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(1)))
                rec.SetValue(1, If(f_Line(2) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(2)))
                rec.SetValue(2, If(f_Line(3) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(3)))
                rec.SetValue(3, If(f_Line(5) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(5)))
                rec.SetValue(4, If(f_Line(6) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(6)))
                rs.Insert(rec)
            Catch ex As Exception
                strerr_col = ex.Message
            End Try
            Return strerr_col
        End Function

  • Encrypt/Decrypt binary mp3 with mcrypt, missing mimetype

    - by Jeremy Dicaire
    I have a script that reads an mp3 file and encrypts it. I want to be able to decrypt this file and convert it to base64 so it can play in HTML5. Key 1 will be stored on the page and static; key 2 will be unique for each file. For testing I used:

        $key1 = md5(time());
        $key2 = md5($key1 . time());

    Here is my encode PHP code:

        // Get file content
        $file = file_get_contents('test.mp3');

        // Encrypt file
        $Encrypt = mcrypt_encrypt(MCRYPT_RIJNDAEL_256, $key1, $file, MCRYPT_MODE_CBC, $key2);
        $Encrypt = trim(base64_encode($Encrypt));

        // Create new file
        $fileE = fopen("test.mp3e", 'w') or die("can't open file");

        // Put encrypted content
        fwrite($fileE, $Encrypt);

        // Close file
        fclose($fileE);

    Here is the code that doesn't work (the decoded file is the same size, but has no mimetype):

        // Get file content
        $fileE = file_get_contents('test.mp3e');

        // Decode
        $fileDecoded = base64_decode($fileE);

        // Decrypt file
        $Decrypt = mcrypt_decrypt(MCRYPT_RIJNDAEL_256, $key1, $fileDecoded, MCRYPT_MODE_CBC, $key2);
        $Decrypt = trim($Decrypt);

        // Create new file
        $file = fopen("test.mp3", 'w') or die("can't open file");

        // Put decrypted content
        fwrite($file, $Decrypt);

        // Close file
        fclose($file);

  • Syncing data between devel/live databases in Django

    - by T. Stone
    With Django's new multi-db functionality in the development version, I've been trying to create a management command that lets me synchronize the data from the live site down to a developer machine for extended testing. (Having actual data, particularly user-entered data, allows me to test a broader range of inputs.) Right now I've got a "mostly" working command. It can sync "simple" model data, but the problem I'm having is that it ignores ManyToMany fields, and I don't see any reason for it to do so. Anyone have any ideas of either how to fix that or a better way to handle this? Should I export that first query to a fixture and then re-import it?

        from django.core.management.base import LabelCommand
        from django.db.utils import IntegrityError
        from django.db import models
        from django.conf import settings

        LIVE_DATABASE_KEY = 'live'

        class Command(LabelCommand):
            help = ("Synchronizes the data between the local machine and the live server")
            args = "APP_NAME"
            label = 'application name'
            requires_model_validation = False
            can_import_settings = True

            def handle_label(self, label, **options):
                # Make sure we're running the command on a developer machine
                # and that we've got the right settings
                db_settings = getattr(settings, 'DATABASES', {})
                if not LIVE_DATABASE_KEY in db_settings:
                    print 'Could not find "%s" in database settings.' % LIVE_DATABASE_KEY
                    return
                if db_settings.get('default') == db_settings.get(LIVE_DATABASE_KEY):
                    print 'Data cannot synchronize with self. This command must be run on a non-production server.'
                    return

                # Fetch all models for the given app
                try:
                    app = models.get_app(label)
                    app_models = models.get_models(app)
                except:
                    print "The app '%s' could not be found or models could not be loaded for it." % label

                for model in app_models:
                    print 'Syncing %s.%s ...' % (model._meta.app_label, model._meta.object_name)

                    # Query each model from the live site
                    qs = model.objects.all().using(LIVE_DATABASE_KEY)

                    # ...and save it to the local database
                    for record in qs:
                        try:
                            record.save(using='default')
                        except IntegrityError:
                            # Skip as the record probably already exists
                            pass

  • python-iptables: Cryptic error when allowing incoming TCP traffic on port 1234

    - by Lucas Kauffman
    I wanted to write an iptables script in Python. Rather than calling iptables itself, I wanted to use the python-iptables package. However, I'm having a hard time getting some basic rules set up. I wanted to use the filter table to accept incoming TCP traffic on port 1234, so I wrote this:

        import iptc

        chain = iptc.Chain(iptc.TABLE_FILTER, "INPUT")
        rule = iptc.Rule()
        target = iptc.Target(rule, "ACCEPT")
        match = iptc.Match(rule, 'tcp')
        match.dport = '1234'
        rule.add_match(match)
        rule.target = target
        chain.insert_rule(rule)

    However, when I run this, I get this thrown back at me:

        Traceback (most recent call last):
          File "testing.py", line 9, in <module>
            chain.insert_rule(rule)
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1133, in insert_rule
            self.table.insert_entry(self.name, rbuf, position)
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1166, in new
            obj.refresh()
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1230, in refresh
            self._free()
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1224, in _free
            self.commit()
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1219, in commit
            raise IPTCError("can't commit: %s" % (self.strerror()))
        iptc.IPTCError: can't commit: Invalid argument
        Exception AttributeError: "'NoneType' object has no attribute 'get_errno'" in <bound method Table.__del__ of <iptc.Table object at 0x7fcad56cc550>> ignored

    Does anyone have experience with python-iptables who could enlighten me on what I did wrong?

  • Represent multiple Null/Generic objects in an ActiveRecord association?

    - by slothbear
    I have a Casefile model that belongs_to a Doctor. In addition to all the "real" doctors, there are several generic Doctors: "self-treated", "not specified", and "removed" (it used to have a real doctor, but no longer does). I suspect there will be even more generic values in the future. I started with special "doctors" in the database, generated from seed. The generic Doctors only need to respond to the "name" and "real_doctor?" methods. This worked with one, was strained with two, and now feels completely broken. I want to change the behavior and can't figure out how to test it, which is a bad sign. Creating all the generic objects for testing is also troublesome, as it means including fake values to pass validation of the required Doctor attributes. The Null Object pattern works well for one generic object: the "name" method could check casefile.doctor.nil? and return "self-treated", as demonstrated by Craig Ambrose. What pattern should I use when there are multiple generic objects with very limited state?

  • Render action return View(); form problem

    - by Roger Rogers
    I'm new to MVC, so please bear with me. :-) I've got a strongly typed "Story" view. This view (story) can have comments. I've created two views (not partials) for my Comments controller, "ListStoryComments" and "CreateStoryComment", which do what their names imply. These views are included in the Story view using RenderAction, e.g.:

        <!-- List comments -->
        <h2>All Comments</h2>
        <% Html.RenderAction("ListStoryComments", "Comments", new { id = Model.Story.Id }); %>

        <!-- Create new comment -->
        <% Html.RenderAction("CreateStoryComment", "Comments", new { id = Model.Story.Id }); %>

    (I pass in the Story id in order to list related comments.) All works as I hoped, except that when I post a new comment using the form, it returns the current (parent) view, but the comment form field still shows the last content I typed in, and the ListStoryComments view isn't updated to show the new comment. Basically, the page is being loaded from cache, as if I had pressed the browser's back button. If I press F5 it will try to repost the form. If I reload the page manually (re-enter the URL in the browser's address bar) and then press F5, I will see my new content and the empty form field, which is my desired result. For completeness, my CreateStoryComment action looks like this:

        [HttpPost]
        public ActionResult CreateStoryComment(
            [Bind(Exclude = "Id, Timestamp, ByUserId, ForUserId")] Comment commentToCreate)
        {
            try
            {
                commentToCreate.ByUserId = userGuid;
                commentToCreate.ForUserId = userGuid;
                commentToCreate.StoryId = 2; // hard-coded for testing

                _repository.CreateComment(commentToCreate);

                return View();
            }
            catch
            {
                return View();
            }
        }
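
    Note: the stale-form symptom is characteristic of returning a view directly from a POST handler. The usual cure is the Post/Redirect/Get pattern: after a successful save, redirect so the browser issues a fresh GET, and F5 can no longer re-post. A sketch of what that might look like here, reusing the poster's names; the "Details"/"Stories" route values are illustrative, not from the original post:

        [HttpPost]
        public ActionResult CreateStoryComment(Comment commentToCreate)
        {
            try
            {
                _repository.CreateComment(commentToCreate);

                // Redirect to the parent story: the comment list re-renders
                // with fresh data and the form field comes back empty.
                return RedirectToAction("Details", "Stories",
                                        new { id = commentToCreate.StoryId });
            }
            catch
            {
                // On failure, re-display the form with the user's input.
                return View(commentToCreate);
            }
        }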

  • Writing robust and "modern" Fortran code

    - by Blklight
    In some scientific environments you often cannot go without FORTRAN, as most of the developers only know that idiom, and there is a lot of legacy code and related experience. And frankly, there are not many other cross-platform options for high-performance programming (C++ would do the task, but the syntax, zero-based arrays, and pointers are too much for most engineers ;-) ). I'm a C++ guy, but I'm stuck with some F90 projects. So, let's assume a new project must use FORTRAN (F90), but I want to build the most modern software architecture out of it while staying compatible with most "recent" compilers (Intel ifort, but also Sun/HP/IBM's own compilers). So I'm thinking of imposing:

    - global variables forbidden, no gotos, no jump labels, "implicit none", etc.
    - "object-oriented programming" (modules with datatypes + related subroutines)
    - modular/reusable functions, well documented, reusable libraries
    - assertions/preconditions/invariants (implemented using preprocessor statements)
    - unit tests for all (most) subroutines and "objects"
    - an intense "debug mode" (#ifdef DEBUG) with more checks and all possible Intel compiler checks enabled (array bounds, subroutine interfaces, etc.)
    - a uniform and enforced legible coding style, using code-processing tools
    - C stubs/wrappers for libpthread, libDL (and eventually GPU kernels, etc.)
    - C/C++ implementations of utility functions (strings, file operations, sockets, memory alloc/dealloc reference counting for debug mode, etc.)

    (This may all seem like "evident" modern programming practice, but in a legacy FORTRAN world most of these are big changes to the typical programmer workflow.) The goal of all this is to have trustworthy, maintainable, and modular code, whereas in typical FORTRAN modularity is often not a primary goal, and code is trustworthy only if the original developer was very clever and the code has not been changed since (I'm only half joking here). I searched around for references on object-oriented FORTRAN and programming by contract (assertions/preconditions/etc.), and found only ugly and outdated documents, syntaxes and papers written by people with no large-scale project involvement, and dead projects. Any good URLs, advice, or reference papers/books on the subject?

  • SQL 2008: Using separate tables for each datatype to return single row

    - by Thomas C
    Hi all. I thought I'd be flexible this time around and let the users decide what contact information they wish to store in their database. In theory it would look like a single row containing, for instance: name, address, zipcode, Category X, Listitems A. An example FieldType table defining the datatypes available to a user:

        FieldTypeID, FieldTypeName, TableName
        1, "Integer",  "tblContactInt"
        2, "String50", "tblContactStr50"
        ...

    A user then defines his fields in the FieldDefinition table:

        FieldDefinitionID, FieldTypeID, FieldDefinitionName
        11, 2, "Name"
        12, 2, "Adress"
        13, 1, "Age"

    Finally, we store the actual contact data in separate tables depending on its datatype. The master table only contains the ContactID:

        tblContact:
        ContactID
        21
        22

        tblContactStr50:
        ContactStr50ID, ContactID, FieldDefinitionID, ContactStr50Value
        31, 21, 11, "Person A"
        32, 21, 12, "Adress of person A"
        33, 22, 11, "Person B"

        tblContactInt:
        ContactIntID, ContactID, FieldDefinitionID, ContactIntValue
        41, 22, 13, 27

    Question: is it possible to return the content of these tables in two rows like this?

        ContactID, Name, Adress, Age
        21, "Person A", "Adress of person A", NULL
        22, "Person B", NULL, 27

    I have looked into using COALESCE and temp tables, wondering if this is at all possible. Even if it is, maybe I'm only adding complexity while sacrificing performance for a benefit in data storage and user-defined options. What do you think? Best Regards /Thomas C

  • C#: Accessing PerformanceCounters for the ".NET CLR Memory category"

    - by Mads Ravn
    I'm trying to access the performance counters located in the ".NET CLR Memory" category through C# using the PerformanceCounter class. However, I cannot instantiate the counter with what I would expect was the correct category/counter name:

        new PerformanceCounter(".NET CLR Memory", "# bytes in all heaps",
                               Process.GetCurrentProcess().ProcessName);

    I tried looping through categories and counters using the following code:

        string[] categories = PerformanceCounterCategory.GetCategories()
            .Select(c => c.CategoryName).OrderBy(s => s).ToArray();
        string toInspect = string.Join(",\r\n", categories);

        System.Text.StringBuilder interestingToInspect = new System.Text.StringBuilder();
        string[] interestingCategories = categories
            .Where(s => s.StartsWith(".NET") || s.Contains("Memory")).ToArray();

        foreach (string interestingCategory in interestingCategories)
        {
            PerformanceCounterCategory cat = new PerformanceCounterCategory(interestingCategory);
            foreach (PerformanceCounter counter in cat.GetCounters())
            {
                interestingToInspect.AppendLine(interestingCategory + ":" + counter.CounterName);
            }
        }
        toInspect = interestingToInspect.ToString();

    But I could not find anything that seems to match. Is it not possible to observe these values from within the CLR, or am I doing something wrong? The environment, should it matter, is .NET 4.0 running on a 64-bit Windows 7 box.
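
    Note: ".NET CLR Memory" is a multi-instance category, which may explain both problems: GetCounters() throws for such categories unless an instance name is supplied, and the counter is usually listed as "# Bytes in all Heaps" with the instance named after the process (a second process with the same name shows up as "myapp#1"). A sketch of how this is commonly probed; treat the exact instance-name handling as an assumption to verify in perfmon:

        using System;
        using System.Diagnostics;

        class ClrMemoryProbe
        {
            static void Main()
            {
                var category = new PerformanceCounterCategory(".NET CLR Memory");

                // For a multi-instance category, list the instances first,
                // then open the counter for one specific instance.
                foreach (string instance in category.GetInstanceNames())
                {
                    Console.WriteLine(instance);
                }

                string name = Process.GetCurrentProcess().ProcessName;
                using (var counter = new PerformanceCounter(
                    ".NET CLR Memory", "# Bytes in all Heaps", name, true))
                {
                    Console.WriteLine("{0} bytes in all heaps", counter.NextValue());
                }
            }
        }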

  • deepcopy and python - tips to avoid using it?

    - by blackkettle
    Hi, I have a very simple Python routine that involves cycling through a list of roughly 20,000 latitude/longitude coordinates and calculating the distance of each point to a reference point.

        def compute_nearest_points(lat, lon, nPoints=5):
            """Find the nearest N points, given the input coordinates."""
            points = session.query(PointIndex).all()

            oldNearest = []
            newNearest = []
            for n in xrange(nPoints):
                oldNearest.append(PointDistance(None, None, None, 99999.0, 99999.0))
                # This is almost certainly an inappropriate use of deepcopy,
                # but how SHOULD I be doing this?!?!
                newNearest.append(obj2)

            for point in points:
                distance = compute_spherical_law_of_cosines(lat, lon, point.avg_lat, point.avg_lon)
                k = 0
                for p in oldNearest:
                    if distance < p.distance:
                        newNearest[k] = PointDistance(point.point, point.kana, point.english,
                                                      point.avg_lat, point.avg_lon,
                                                      distance=distance)
                        break
                    else:
                        newNearest[k] = deepcopy(oldNearest[k])
                    k += 1
                for j in range(k, nPoints - 1):
                    newNearest[j + 1] = deepcopy(oldNearest[j])
                oldNearest = deepcopy(newNearest)

            # We're done, now print the result
            for point in oldNearest:
                print point.station, point.english, point.distance
            return

    I initially wrote this in C, using the exact same approach, and it works fine there and is basically instantaneous for nPoints <= 100. So I decided to port it to Python because I wanted to use SQLAlchemy to do some other stuff. I first ported it without the deepcopy statements that now pepper the method, and this caused the results to be "odd", or partially incorrect, because some of the points were just getting copied as references (I guess? I think?) -- but it was still pretty nearly as fast as the C version. Now with the deepcopy calls added, the routine does its job correctly, but it has incurred an extreme performance penalty and now takes several seconds to do the same job. This seems like a pretty common job, but I'm clearly not doing it the Pythonic way. How should I be doing this so that I still get the correct results without having to include deepcopy everywhere?

  • Apache attack on compromised server, iframe injected by string replace

    - by Quang-Tuan Luong
    My server has been compromised recently. This morning, I discovered that the intruder is injecting an iframe into each of my HTML pages. After testing, I have found that the way he does it is by getting Apache (?) to replace every instance of

        </body>

    by

        <iframe link to malware></iframe></body>

    For example, if I browse a file residing on the server consisting of:

        </body>
        </body>

    then my browser sees a file consisting of:

        <iframe link to malware></iframe></body>
        <iframe link to malware></iframe></body>

    I immediately stopped Apache to protect my visitors, but so far I have not been able to find what the intruder changed on the server to perform the attack. I presume he modified an Apache config file, but I have no idea which one. In particular, I have looked for recently modified files by timestamp, but did not find anything noteworthy. Thanks for any help. Tuan. PS: I am in the process of rebuilding a new server from scratch, but in the meantime I would like to keep the old one running, since this is a business site.

  • Getting "ï»¿" at the beginning of my XML file after save()

    - by Remy
    I'm opening an existing XML file with C#, and I replace some nodes in there. All works fine. Just after I save it, I get the following characters at the beginning of the file: ï»¿ (EF BB BF in hex). The whole first line:

        ï»¿<?xml version="1.0" encoding="UTF-8" standalone="yes"?>

    The rest of the file looks like a normal XML file. The simplified code is here:

        XmlDocument doc = new XmlDocument();
        doc.Load(xmlSourceFile);

        XmlNode translation = doc.SelectSingleNode("//trans-unit[@id='127']");
        translation.InnerText = "testing";

        doc.Save(xmlTranslatedFile);

    I'm using a C# WinForms application with .NET 4.0. Any ideas? Why would it do that? Can we disable that somehow? It's for Adobe InCopy, which does not open the file like this.

    UPDATE: Alternative solution: saving it with an XmlTextWriter works too:

        XmlTextWriter writer = new XmlTextWriter(inCopyFilename, null);
        doc.Save(writer);
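
    Note: EF BB BF is the UTF-8 byte order mark, which XmlDocument.Save(string) emits by default when writing UTF-8. The usual way to suppress it is to save through a writer whose encoding is UTF8Encoding(false); the constructor argument controls BOM emission. This is presumably also why the XmlTextWriter workaround above helps, since a null encoding writes BOM-less UTF-8. A minimal sketch; the file names are placeholders:

        using System.Text;
        using System.Xml;

        class SaveWithoutBom
        {
            static void Main()
            {
                XmlDocument doc = new XmlDocument();
                doc.Load("source.xml");

                XmlWriterSettings settings = new XmlWriterSettings();
                // false = do not emit the EF BB BF byte order mark.
                settings.Encoding = new UTF8Encoding(false);

                using (XmlWriter writer = XmlWriter.Create("translated.xml", settings))
                {
                    doc.Save(writer);
                }
            }
        }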

  • Test-First development tool for SQL Server 2005?

    - by Jeff Jones
    For several years I have been using a testing tool called qmTest that allows me to do test-driven database development for some Firebird databases. I write a test for a new feature (table, trigger, stored procedure, etc.) until it fails, then modify the database until the test passes. If necessary, I do more work on the test until it fails again, then modify the database until the test passes. Once the test for the feature is complete and passes 100% of the time, I save it in a suite of other tests for the database. Before moving on to another test or a deployment, I run all the tests as a suite to make sure nothing is broken. Tests can have dependencies on other tests, and the results are recorded and displayed in a browser. Nothing new here, I am sure. Our shop is aiming toward standardizing on MSSQLServer and I want to use the same procedure for developing our databases. Does anyone know of tools that allow or encourage this kind of development? I believe the Team System does, but we do not own that at this point, and probably will not for some time. I am not opposed to scripting, but would welcome a more graphical environment. Any suggestions?

  • SQL to get list of dates as well as days before and after without duplicates

    - by Nathan Koop
    I need to display a list of dates, which I have in a table:

        SELECT mydate AS MyDate, 1 AS DateType
        FROM myTable
        WHERE myTable.fkId = @MyFkId;

        Jan 1, 2010 - 1
        Jan 2, 2010 - 1
        Jan 10, 2010 - 1

    No problem. However, I now need to display the date before and the date after as well, with a different DateType:

        Dec 31, 2009 - 2
        Jan 1, 2010 - 1
        Jan 2, 2010 - 1
        Jan 3, 2010 - 2
        Jan 9, 2010 - 2
        Jan 10, 2010 - 1
        Jan 11, 2010 - 2

    I thought I could use a union:

        SELECT MyDate, DateType
        FROM
        (
            SELECT mydate - 1 AS MyDate, 2 AS DateType
            FROM myTable
            WHERE myTable.fkId = @MyFkId

            UNION

            SELECT mydate + 1 AS MyDate, 2 AS DateType
            FROM myTable
            WHERE myTable.fkId = @MyFkId

            UNION

            SELECT mydate AS MyDate, 1 AS DateType
            FROM myTable
            WHERE myTable.fkId = @MyFkId
        ) AS myCombinedDateTable

    This, however, includes duplicates of the original dates:

        Dec 31, 2009 - 2
        Jan 1, 2010 - 2
        Jan 1, 2010 - 1
        Jan 2, 2010 - 2
        Jan 2, 2010 - 1
        Jan 3, 2010 - 2
        Jan 9, 2010 - 2
        Jan 10, 2010 - 1
        Jan 11, 2010 - 2

    How can I best remove these duplicates? I am considering a temporary table, but am unsure if that is the best way to do it. It also appears to me that this may cause performance issues, as I am running the same query three separate times. What would be the best way to handle this request?

  • Calculate total batch upload transfer percent with limited information

    - by GONeale
    Hi there, I have a system which uploads to a server file by file and displays a progress bar for file upload progress, with a second progress bar underneath which I want to indicate the percentage of the batch completed across all files queued for upload. The information and algorithm I can work out is:

        Bytes Sent / Total Bytes To Send = first progress bar (e.g. 512KB of 1024KB (50%))

    That works fine. However, supposing I have two other files left to upload, but both file sizes are unknown (as the size is only known once the file is about to commence upload, at which point it is compressed and its size determined), how would I go about making my second progress bar? I didn't think this would be possible, as I would need "Total Bytes Sent" / "Total Bytes To Send" to replicate the logic of my first progress bar on a larger scale. However, I did get a version working:

        "Current file number we are on" / "total number of files to send"

    This returns the percentage through the batch, but it obviously does not update incrementally and is pretty crude. So on further thinking, I thought that if I could incorporate the current file's percentage into this algorithm, I could perhaps get the correct progress percentage for the batch's current point. I tried this algorithm, but alas, to no avail (sorry to any math heads; it's probably quite apparent why it won't work):

        ("Current file number we are on" / "total number of files to send") * ("Bytes Sent" / "Total Bytes To Send")

    For example, I thought I was on the right track when testing with this example: 2/3 (2nd of 3 files) = 66%, which is right so far, but then when I added * 0.20 (to indicate that only 20% of the 2nd file has uploaded) we went back to 13%. What I need is only a little over 33%! I did try the inverse at 0.80, and (2/3 * (2/3 * 0.2)). Can this be done without knowing the total bytes in the batch to upload? Please help! Thank you!
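
    Note: when per-file sizes are unknown up front, the standard trick is to weight each file as an equal slice of the batch bar and add the current file's fractional progress to the count of files already finished, rather than multiplying the two ratios. A sketch of that arithmetic; the method and parameter names are illustrative:

        static class Progress
        {
            // completedFiles: files fully uploaded; totalFiles: files in the batch;
            // bytesSent / totalBytes: progress of the file currently uploading.
            public static double BatchPercent(int completedFiles, int totalFiles,
                                              long bytesSent, long totalBytes)
            {
                double currentFraction =
                    totalBytes > 0 ? (double)bytesSent / totalBytes : 0.0;
                return (completedFiles + currentFraction) / totalFiles * 100.0;
            }
        }

        // Example: second of three files, 20% uploaded:
        // Progress.BatchPercent(1, 3, 200, 1000) returns 40.0
        // (1.2 of 3 equal slices complete -- "a little over 33%", as required).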

  • A map and set which uses contiguous memory and has a reserve function

    - by edA-qa mort-ora-y
    I use several maps and sets. The lack of contiguous memory and the high number of (de)allocations are a performance bottleneck. I need a mostly STL-compatible map and set class which can use a contiguous block of memory for its internal objects (or multiple blocks). It also needs to have a reserve function so that I can preallocate for expected sizes. Before I write my own, I'd like to check what is available first. Is there something in Boost which does this? Does somebody know of an available implementation elsewhere? Intrusive collection types are not usable here, as the same objects need to exist in several collections. As far as I know, STL memory pools are per-type, not per-instance; such global pools are not efficient with respect to memory locality in multi-CPU/core processing. Object pools don't work either, as the types will be shared between instances but their pools should not be. In some cases a hash map may be an option.

  • C# Execute Method (with Parameters) with ThreadPool

    - by washtik
    We have the following piece of code (the idea for it was found on this website), which will spawn new threads for the method Do_SomeWork(). This enables us to run the method multiple times asynchronously. The code is:

        var numThreads = 20;
        var toProcess = numThreads;
        var resetEvent = new ManualResetEvent(false);

        for (var i = 0; i < numThreads; i++)
        {
            new Thread(delegate()
            {
                Do_SomeWork(Parameter1, Parameter2, Parameter3);
                if (Interlocked.Decrement(ref toProcess) == 0)
                    resetEvent.Set();
            }).Start();
        }
        resetEvent.WaitOne();

    However, we would like to make use of the ThreadPool rather than creating our own new threads, which can be detrimental to performance. The question is: how can we modify the above code to make use of the ThreadPool, keeping in mind that the method Do_SomeWork takes multiple parameters and also has a return type (i.e. the method is not void)? Also, this is C# 2.0.
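
    Note: ThreadPool.QueueUserWorkItem passes a single state object to the callback, so the usual approach is to carry the inputs and the return value in a small holder class. A C# 2.0-compatible sketch (anonymous delegate, no var or lambdas); Do_SomeWork here is a stand-in assumed to return an int, and the field names are illustrative:

        using System.Threading;

        class WorkItem
        {
            public string Parameter1;
            public string Parameter2;
            public int Parameter3;
            public int Result;          // holds Do_SomeWork's return value
        }

        class Scheduler
        {
            // Stand-in for the real worker method.
            static int Do_SomeWork(string a, string b, int c) { return c; }

            static void RunAll(WorkItem[] items)
            {
                int toProcess = items.Length;
                ManualResetEvent resetEvent = new ManualResetEvent(false);

                foreach (WorkItem item in items)
                {
                    ThreadPool.QueueUserWorkItem(delegate(object state)
                    {
                        WorkItem wi = (WorkItem)state;
                        wi.Result = Do_SomeWork(wi.Parameter1, wi.Parameter2, wi.Parameter3);

                        // Signal once the last queued item finishes.
                        if (Interlocked.Decrement(ref toProcess) == 0)
                            resetEvent.Set();
                    }, item);
                }

                resetEvent.WaitOne(); // every item.Result is populated past this point
            }
        }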

  • How does PHP PDO work internally?

    - by Rachel
    I want to use PDO in my application, but before that I want to understand how PDOStatement->fetch and PDOStatement->fetchAll work internally. For my application, I want to do something like "SELECT * FROM myTable" and write the result to a CSV file; the table has around 90,000 rows of data. My question is: if I use PDOStatement->fetch, as I am using it here:

        // First, prepare the statement, using placeholders
        $query = "SELECT * FROM tableName";
        $stmt = $this->connection->prepare($query);

        // Execute the statement
        $stmt->execute();
        var_dump($stmt->fetch(PDO::FETCH_ASSOC));

        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            echo "Hi";
            // Export every row to a file
            fputcsv($data, $row);
        }

    will the result of each fetch be kept in memory? Meaning that when I do the second fetch, memory would hold the data of the first fetch as well as the second; and so if I have 90,000 rows of data and fetch one row at a time, memory would keep accumulating each new fetch result without releasing the previous ones, so by the last fetch memory would already hold 89,999 rows of data. Is this how PDOStatement::fetch works? Performance-wise, how does this stack up against PDOStatement::fetchAll?

  • Using MSBuild 4 command line to publish ASP.NET web application

    - by meandmycode
    With previous versions of MSBuild we used the target '_CopyWebApplication' in order to build and convert the source of a project into a published site. This worked OK, but wasn't ideal. In .NET 4, the publishing process is somewhat more sophisticated, and additionally seems a bit of a black box to understand. While packages look great, I cannot fully understand how they can be harnessed by a build server: the build server would not get any manifest information, and equally, something (MSBuild?) is CREATING this manifest information FROM the project file. On our build server, I ideally want to say: here is my csproj file, deploy it with package configuration 'x'. I'm trying to understand the workflow I need to make this happen. Right now, when I use _CopyWebApplication, the result is different from doing a publish from Visual Studio 2010; primarily, web.config transforms aren't processed, and obviously MSDeploy isn't involved at all. Can somebody point me in the right direction? I believe I need to get MSBuild to do the equivalent of 'Build Deployment Package' and then use MSDeploy to deploy this from our build server to our CI testing environments. I know this is a very vague post, but I hope somebody can give me some hints. I'll be continuing to research as well, so if I make any progress, I'll post my findings here. Thanks in advance, Stephen.
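
    Note: for VS2010-era web application projects, 'Build Deployment Package' corresponds to the Package target of the web publishing pipeline, so a build server can typically produce the same package from the command line along these lines (the paths are illustrative, and the exact properties are worth verifying against your project):

        msbuild MyWebApp.csproj /t:Package /p:Configuration=Release /p:PackageLocation="C:\drops\MyWebApp.zip"

    Packaging also runs the web.config transform for the chosen configuration, and the output includes a generated MyWebApp.deploy.cmd that wraps msdeploy.exe for pushing the package to a target environment.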

  • Question about the Cloneable interface and the exception that should be thrown

    - by Nazgulled
    Hi, the Java documentation says:

        A class implements the Cloneable interface to indicate to the Object.clone() method
        that it is legal for that method to make a field-for-field copy of instances of that
        class. Invoking Object's clone method on an instance that does not implement the
        Cloneable interface results in the exception CloneNotSupportedException being thrown.
        By convention, classes that implement this interface should override Object.clone
        (which is protected) with a public method. See Object.clone() for details on
        overriding this method. Note that this interface does not contain the clone method.
        Therefore, it is not possible to clone an object merely by virtue of the fact that it
        implements this interface. Even if the clone method is invoked reflectively, there is
        no guarantee that it will succeed.

    And I have this UserProfile class:

        public class UserProfile implements Cloneable {
            private String name;
            private int ssn;
            private String address;

            public UserProfile(String name, int ssn, String address) {
                this.name = name;
                this.ssn = ssn;
                this.address = address;
            }

            public UserProfile(UserProfile user) {
                this.name = user.getName();
                this.ssn = user.getSSN();
                this.address = user.getAddress();
            }

            // get methods here...

            @Override
            public UserProfile clone() {
                return new UserProfile(this);
            }
        }

    And for testing purposes, I do this in main():

        UserProfile up1 = new UserProfile("User", 123, "Street");
        UserProfile up2 = up1.clone();

    So far, no problems compiling or running. Now, per my understanding of the documentation, removing implements Cloneable from the UserProfile class should cause an exception to be thrown on the up1.clone() call, but it doesn't. I've read around here that the Cloneable interface is broken, but I don't really know what that means. Am I missing something?
