Search Results

Search found 37647 results on 1506 pages for 'sql performance'.


  • Figuring out the Nyquist performance limitation of an ADC on an example PIC microcontroller

    - by AKE
    I'm spec-ing the suitability of a dsPIC microcontroller for an analog-to-digital application. This would be preferable to using dedicated A/D chips and a separate dedicated DSP chip. To do that, I've had to run through some computations, pulling the relevant parameters from the datasheets. I'm not sure I've got it right -- would appreciate a check! (EDITED NOTE: The PIC10F220 in the example below was selected ONLY to walk through a simple example to check that I'm interpreting Tacq, Fosc, TAD, and divisor correctly in working through this sort of Nyquist analysis. The actual chips I'm considering for the design are the dsPIC33FJ128MC804 (with 16b A/D) or dsPIC30F3014 (with 12b A/D).) A simple example: PIC10F220 is the simplest possible PIC with an ADC Runs at clock speed of 8MHz. Has an instruction cycle of 0.5us (4 clock steps per instruction) So: Taking Tacq = 6.06 us (acquisition time for ADC, assume chip temp. = 50*C) [datasheet p34] Taking Fosc = 8MHz (? clock speed) Taking divisor = 4 (4 clock steps per CPU instruction) This gives TAD = 0.5us (TAD = 1/(Fosc/divisor) ) Conversion time is 13*TAD [datasheet p31] This gives conversion time 6.5us ADC duration is then 12.56 us [? Tacq + 13*TAD] Assuming at least 2 instructions for load/store: This is another 1 us [0.5 us per instruction] Which would give max sampling rate of 73.7 ksps (1/13.56) Supposing 8 more instructions for real-time processing: This is another 4 us Thus, total ADC/handling time = 17.56us (12.56us + 1us + 4us) So expected upper sampling rate is 56.9 ksps. Nyquist frequency for this sampling rate is therefore 28 kHz. If this is right, it suggests the (theoretical) performance suitability of this chip's A/D is for signals that are bandlimited to 28 kHz. Is this a correct interpretation of the information given in the data sheet in obtaining the Nyquist performance limit? Any opinions on the noise susceptibility of ADCs in PIC / dsPIC chips would be much appreciated! AKE
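
    For what it's worth, here is the post's arithmetic written out as one chain (same numbers, nothing new), which makes the Tacq + 13·TAD interpretation easier to eyeball:

    ```latex
    \begin{aligned}
    T_{AD}    &= \frac{\text{divisor}}{F_{osc}} = \frac{4}{8\,\mathrm{MHz}} = 0.5\,\mu\mathrm{s} \\
    T_{conv}  &= 13 \cdot T_{AD} = 6.5\,\mu\mathrm{s} \\
    T_{ADC}   &= T_{acq} + T_{conv} = 6.06 + 6.5 = 12.56\,\mu\mathrm{s} \\
    T_{total} &= T_{ADC} + \underbrace{2 \cdot 0.5}_{\text{load/store}} + \underbrace{8 \cdot 0.5}_{\text{processing}} = 17.56\,\mu\mathrm{s} \\
    f_{s}     &= 1 / T_{total} \approx 56.9\,\mathrm{ksps}, \qquad f_{Nyquist} = f_{s}/2 \approx 28\,\mathrm{kHz}
    \end{aligned}
    ```
    The arithmetic is self-consistent; the real question being asked is whether Tacq and the 13-TAD conversion are meant to be added back to back for these parts, which is a datasheet-interpretation issue rather than a math one.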

  • jQuery performance

    - by jAndy
    Hi folks, imagine you have to do a lot of DOM manipulation (in my case, it's a kind of dynamic list). Look at this example:
    var $buffer = $('<ul/>', {
        'class': '.custom-example',
        'css': {
            'position': 'absolute',
            'top': '500px'
        }
    });
    $.each(pages[pindex], function(i, v){
        $buffer.append(v);
    });
    $buffer.insertAfter($root);
    "pages" is an array which holds LI elements as jQuery objects. "$root" is a UL element. What happens after this code is that both ULs are animated (scrolling) and finally, within the callback of animate, this code is executed:
    $root.detach();
    $root = $buffer;
    $root.css('top', '0px');
    $buffer = null;
    This works very well; the only thing I'm pi**ed off about is the performance. I do cache all DOM elements I'm laying a hand on. Without looking too deep into jQuery's source code, is there a chance that my performance issues are located there? Does jQuery use DocumentFragments to append things? If you create a new DOM element with var $div = $('<div/>'), it is only stored in memory at that point, isn't it?

  • WF performance with 20,000 new persisted workflow instances each month

    - by Nikola Stjelja
    Windows Workflow Foundation has a reputation for slow workflow instance persistence. I'm planning a project whose business layer will be based on WF workflows exposed as WCF services. The project will have 20,000 new workflow instances created each month, and each instance could take up to 2 months to finish. I was led to believe that, given how slow WF persistence is, this load would make the project unattainable for performance reasons. I have the following questions: Is this true? Will my performance be unacceptable under that load, given WF's persistence speed limitations? How can I solve the problem? We currently have two possible solutions: 1. Each new business process request (e.g. "give me a new driver's license") becomes a new WF instance, and the number of persistence operations is limited by forwarding all status-request operations to saved state values in a separate database. 2. Keep only a small number of workflow instances alive at any given time, with no persistence whatsoever (only in case of system crashes etc.), by breaking each workflow step into a separate workflow, with that workflow handling every business process request instance currently at that step (e.g. I submit my driver's license request form, which is step one; we have 100 cases of that, and my step-one workflow handles every one of them simultaneously). I'm very interested in a solution to this problem. If you want to discuss it, please feel free to mail me at [email protected]
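
    Not an answer to which of the two designs is better, but a minimal sketch of the persist-on-idle pattern the trade-off revolves around, assuming WF4's WorkflowApplication with SqlWorkflowInstanceStore (on WF 3.x the rough equivalent is SqlWorkflowPersistenceService); the activity type and connection string below are placeholders:

    ```csharp
    using System;
    using System.Activities;
    using System.Activities.DurableInstancing;

    class Program
    {
        static void Main()
        {
            // Placeholder activity and connection string, for illustration only.
            Activity workflow = new LicenseProcess();
            var store = new SqlWorkflowInstanceStore(
                "Server=.;Database=WFInstanceStore;Integrated Security=True");

            var app = new WorkflowApplication(workflow)
            {
                InstanceStore = store,
                // Persist and unload as soon as the instance goes idle, so an
                // instance that takes two months does not sit in memory.
                PersistableIdle = e => PersistableIdleAction.Unload
            };
            app.Run();
        }
    }

    // Stand-in for the real business workflow.
    class LicenseProcess : Activity { }
    ```
    With this pattern, 20,000 new instances a month mostly become database rows rather than live objects; the open question in the post is how expensive the persist/load round-trips themselves turn out to be under that load.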

  • Improve performance of searching JSON object with jQuery

    - by cale_b
    Please forgive me if this is answered on SO somewhere already. I've searched, and it seems as though this is a fairly specific case. Here's an example of the JSON (NOTE: this is very stripped down - this is dynamically loaded, and currently there are 126 records): var layout = { "2":[{"id":"40","attribute_id":"2","option_id":null,"design_attribute_id":"4","design_option_id":"131","width":"10","height":"10", "repeat":"0","top":"0","left":"0","bottom":"0","right":"0","use_right":"0","use_bottom":"0","apply_to_options":"0"}, {"id":"41","attribute_id":"2","option_id":"115","design_attribute_id":"4","design_option_id":"131","width":"2","height":"1", "repeat":"0","top":"0","left":"0","bottom":"4","right":"2","use_right":"0","use_bottom":"0","apply_to_options":"0"}, {"id":"44","attribute_id":"2","option_id":"118","design_attribute_id":"4","design_option_id":"131","width":"10","height":"10", "repeat":"0","top":"0","left":"0","bottom":"0","right":"0","use_right":"0","use_bottom":"0","apply_to_options":"0"}], "5":[{"id":"326","attribute_id":"5","option_id":null,"design_attribute_id":"4","design_option_id":"154","width":"5","height":"5", "repeat":"0","top":"0","left":"0","bottom":"0","right":"0","use_right":"0","use_bottom":"0","apply_to_options":"0"}] }; I need to match the right combination of values. Here's the function I currently use: function drawOption(attid, optid) { var attlayout = layout[attid]; $.each(attlayout, function(k, v) { // d_opt_id and d_opt_id are global scoped variable set elsewhere if (v.design_attribute_id == d_att_id && v.design_option_id == d_opt_id && v.attribute_id == attid && ((v.apply_to_options == 1 || (v.option_id === optid)))) { // Do stuff here } }); } The issue is that I might iterate through 10-15 layouts (unique attid's), and any given layout (attid) might have as many as 50 possibilities, which means that this loop is being run A LOT. Given the multiple criteria that have to be matched, would an AJAX call work better? (This JSON is dynamically created via PHP, so I could craft a PHP function that could possibly do this more efficently), or am I completely missing something about how to find items in a JSON object? As always, any suggestions for improving the code are welcome! EDIT: I apologize for not making this clear, but the purpose of this question is to find a way to improve the performance. The page has a lot of javascript, and this is a location where I know that performance is lower than it could be.

  • ADO.NET Fill method not throwing an error when running a stored procedure that does not exist

    - by Mike
    I am using a combination of the Enterprise Library and the original Fill method of ADO.NET, because I need to open and close the command connection myself as I am capturing the InfoMessage event. Here is my code so far:
    // Set Up Command
    SqlDatabase db = new SqlDatabase(ConfigurationManager.ConnectionStrings[ConnectionName].ConnectionString);
    SqlCommand command = db.GetStoredProcCommand(StoredProcName) as SqlCommand;
    command.Connection = db.CreateConnection() as SqlConnection;
    // Set Up Events for Logging
    command.StatementCompleted += new StatementCompletedEventHandler(command_StatementCompleted);
    command.Connection.FireInfoMessageEventOnUserErrors = true;
    command.Connection.InfoMessage += new SqlInfoMessageEventHandler(Connection_InfoMessage);
    // Add Parameters
    foreach (Parameter parameter in Parameters)
    {
        db.AddInParameter(command, parameter.Name, (System.Data.DbType)Enum.Parse(typeof(System.Data.DbType), parameter.Type), parameter.Value);
    }
    // Use the old-style Fill to keep the connection open throughout the population
    // and manage the StatementCompleted and InfoMessage events
    SqlDataAdapter da = new SqlDataAdapter(command);
    DataSet ds = new DataSet();
    // Open Connection
    command.Connection.Open();
    // Populate
    da.Fill(ds);
    // Dispose of the adapter
    if (da != null) { da.Dispose(); }
    // If you do not explicitly close the connection here, it will leak!
    if (command.Connection.State == ConnectionState.Open) { command.Connection.Close(); }
    ...
    Now if I pass in StoredProcName = "ThisProcDoesNotExists" and run this piece of code, neither the command creation nor da.Fill throws an error. Why is this? The only way I can tell it did not run is that it returns a DataSet with 0 tables in it, and when investigating it is not apparent that the procedure does not exist.
    EDIT Upon further investigation, command.Connection.FireInfoMessageEventOnUserErrors = true; is causing the error to be suppressed into the InfoMessage event. From BOL: "When you set FireInfoMessageEventOnUserErrors to true, errors that were previously treated as exceptions are now handled as InfoMessage events. All events fire immediately and are handled by the event handler. If FireInfoMessageEventOnUserErrors is set to false, then InfoMessage events are handled at the end of the procedure." What I want is for each print statement from SQL to create a new log record; setting this property to false combines them into one big string. So if I leave the property set to true, the question now is: can I discern a print message from an error?
    ANOTHER EDIT So now I have the flag set to true and am checking the error number in this method:
    void Connection_InfoMessage(object sender, SqlInfoMessageEventArgs e)
    {
        // These are not really errors unless Number > 0
        // if Number == 0 it is a print message
        foreach (SqlError sql in e.Errors)
        {
            if (sql.Number == 0)
            {
                Logger.WriteInfo("Sql Message", sql.Message);
            }
            else
            {
                // Whatever this was, it was an error
                throw new DataException(String.Format("Message={0},Line={1},Number={2},State={3}", sql.Message, sql.LineNumber, sql.Number, sql.State));
            }
        }
    }
    The issue now is that when I throw the error it does not bubble up to the statement that made the call, or even to the error handler above that; it just bombs out on that line. The populate looks like:
    // Populate
    try
    {
        da.Fill(ds);
    }
    catch (Exception e)
    {
        throw new Exception(e.Message, e);
    }
    Even though I can still see the calling code and methods in the call stack, this exception does not seem to bubble up.
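
    One way to tell a PRINT apart from a real error inside the InfoMessage handler is to check the severity (SqlError.Class) as well as the number: PRINT arrives with severity 0, while a missing stored procedure typically surfaces as error 2812 with a higher severity. A rough sketch of that idea (the logging calls are placeholders, and whether you re-throw later is up to the caller):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    static class InfoMessageHandling
    {
        // Collected here so the caller can decide to throw after Fill() returns,
        // since exceptions thrown from inside the handler tend not to propagate.
        public static readonly List<SqlError> RealErrors = new List<SqlError>();

        // Wired up as: connection.InfoMessage += OnInfoMessage;
        // with connection.FireInfoMessageEventOnUserErrors = true.
        public static void OnInfoMessage(object sender, SqlInfoMessageEventArgs e)
        {
            foreach (SqlError err in e.Errors)
            {
                if (err.Class == 0)
                {
                    // Severity 0: PRINT / informational -- one log record per message.
                    Console.WriteLine("SQL message: {0}", err.Message);
                }
                else
                {
                    // Severity > 0: a genuine error (e.g. 2812 = could not find stored procedure).
                    // With FireInfoMessageEventOnUserErrors = true it will not surface as a
                    // SqlException from Fill, so record it here and surface it afterwards.
                    Console.WriteLine("SQL error {0} (severity {1}): {2}", err.Number, err.Class, err.Message);
                    RealErrors.Add(err);
                }
            }
        }
    }
    ```
    Checking RealErrors after da.Fill(ds) returns, and throwing there, avoids the "bombs out on that line" behaviour described above, because the exception is then raised on the normal call path rather than from inside the provider's event dispatch.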

  • Naming multi-instance performance counters in .NET

    - by Roger Lipscombe
    Most multiple instance performance counters in Windows seem to automatically(?) have a #n on the end if there's more than one instance with the same name. For example: if, in Perfmon, you look under the Process category, you'll see: ... dwm explorer explorer#1 ... I have two explorer.exe processes, so the second counter has #1 appended to its name. When I attempt to do this in a .NET application: I can create the category, and register the instance (using the PerformanceCounterCategory.Create that takes a CounterCreationDataCollection). I can open the counter for write and write to it. When I open the counter a second time, it opens the same counter. This means that I have two applications fighting over the counters. The documentation for PerformanceCounter.InstanceName states that # is not allowed in the name. So: how do I have multiple-instance performance counters that are actually multiple instance? And where the second (and subsequent) instances get #n appended to the name? That is: I know that I can put the process ID (e.g.) on the instance name. This works, but has the unfortunate side effect that restarting the process results in a new PID, and Perfmon continues monitoring the old counter.
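
    A sketch of the multi-instance setup being described here, assuming a custom category created as MultiInstance and a per-process instance name; the category and counter names are made up, and creating a category normally requires elevated rights:

    ```csharp
    using System.Diagnostics;

    class CounterSetup
    {
        const string Category = "MyApp Counters";   // hypothetical category name
        const string Counter  = "Requests/sec";     // hypothetical counter name

        public static void EnsureCategory()
        {
            if (PerformanceCounterCategory.Exists(Category)) return;
            var counters = new CounterCreationDataCollection
            {
                new CounterCreationData(Counter, "Requests per second",
                    PerformanceCounterType.RateOfCountsPerSecond32)
            };
            // MultiInstance is what lets several processes publish side by side.
            PerformanceCounterCategory.Create(Category, "Example counters",
                PerformanceCounterCategoryType.MultiInstance, counters);
        }

        public static PerformanceCounter OpenInstance()
        {
            // Custom categories do not get the automatic "#1" suffix that the OS
            // providers (like Process) use, so the instance name has to be made
            // unique explicitly, e.g. by appending the process id.
            var proc = Process.GetCurrentProcess();
            string instance = proc.ProcessName + "_" + proc.Id;
            var counter = new PerformanceCounter(Category, Counter, instance, readOnly: false);
            // Process lifetime ties the instance to this process, so its data does
            // not linger after the process exits (set before first use).
            counter.InstanceLifetime = PerformanceCounterInstanceLifetime.Process;
            return counter;
        }
    }
    ```
    This still leaves the PerfMon annoyance described above (a restarted process gets a new PID and therefore a new instance name), which appears to be inherent to custom categories rather than something the API exposes a switch for.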

  • Improving the performance of XSL

    - by Rachel
    I am using the XSLT 2.0 code below to find the ids of the text nodes that contain the list of indices I give as input. The code works perfectly, but in terms of performance it takes a long time on huge files. Even for huge files, if the index values are small, the result comes back within a few ms. I am using the saxon9he Java processor to execute the XSL.
    <xsl:variable name="insert-data" as="element(data)*">
        <xsl:for-each-group select="doc($insert-file)/insert-data/data" group-by="xsd:integer(@index)">
            <xsl:sort select="current-grouping-key()"/>
            <data index="{current-grouping-key()}"
                  text-id="{generate-id(
                      $main-root/descendant::text()[
                          sum((preceding::text(), .)/string-length(.)) ge current-grouping-key()
                      ][1]
                  )}">
                <xsl:copy-of select="current-group()/node()"/>
            </data>
        </xsl:for-each-group>
    </xsl:variable>
    In the above solution, if the index value is very large, say 270962, the XSL takes 83427 ms to execute. For huge files, if the index values are huge, say 4605415 or 4605431, it takes several minutes to execute. It seems the computation of the variable "insert-data" is what takes the time, even though it is a global variable and computed only once. Should the XSL be addressed, or the processor? How can I improve the performance of the XSL?

  • Performance difference in for loop condition?

    - by CSharperWithJava
    Hello all, I have a simple question that I am posing mostly out of curiosity. What are the differences between these two lines of code (in C++)?
    for(int i = 0; i < N, N > 0; i++)
    for(int i = 0; i < N && N > 0; i++)
    The selection of the conditions is completely arbitrary; I'm just interested in the differences between , and &&. I'm not a beginner to coding by any means, but I've never bothered with the comma operator. Are there performance/behavior differences, or is it purely aesthetic? One last note: I know there are bigger performance fish to fry than a conditional operator, but I'm just curious. Indulge me. Edit Thanks for your answers. It turns out the code that prompted this question had misused the comma operator in the way I've described. I wondered what the difference was and why it wasn't an && operator, but it was just written incorrectly. I didn't think anything was wrong with it because it worked just fine. Thanks for straightening me out.

  • How can I determine PerlLogHandler performance impact?

    - by Timmy
    I want to create a custom Apache2 log handler, and the template that is found on the apache site is:
    #file:MyApache2/LogPerUser.pm
    #---------------------------
    package MyApache2::LogPerUser;
    use strict;
    use warnings;
    use Apache2::RequestRec ();
    use Apache2::Connection ();
    use Fcntl qw(:flock);
    use File::Spec::Functions qw(catfile);
    use Apache2::Const -compile => qw(OK DECLINED);
    sub handler {
        my $r = shift;
        my ($username) = $r->uri =~ m|^/~([^/]+)|;
        return Apache2::Const::DECLINED unless defined $username;
        my $entry = sprintf qq(%s [%s] "%s" %d %d\n), $r->connection->remote_ip, scalar(localtime), $r->uri, $r->status, $r->bytes_sent;
        my $log_path = catfile Apache2::ServerUtil::server_root, "logs", "$username.log";
        open my $fh, ">>$log_path" or die "can't open $log_path: $!";
        flock $fh, LOCK_EX;
        print $fh $entry;
        close $fh;
        return Apache2::Const::OK;
    }
    1;
    What is the performance cost of the flocks? Is this logging process done in parallel, or in serial with the HTTP request? In parallel the performance would not matter as much, but I wouldn't want the user to wait another split second to add something like this.

  • MySQL Partitioning performance

    - by Imran Pathan
    We measured performance on key-partitioned tables and normal tables separately, but couldn't find any performance improvement with partitioning, even though the queries are pruned. We are using MySQL 5.1.47 on RHEL 4. Table details: UserUsage holds an entry per user mobile number and date with that day's data usage; mobile number and date form the primary key. UserProfile queries the previous table and stores a summary per mobile number; mobile number is the primary key.
    CREATE TABLE `UserUsage` (
        `Msisdn` decimal(20,0) NOT NULL,
        `Date` date NOT NULL,
        .
        .
        PRIMARY KEY USING BTREE (`Msisdn`,`Date`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    PARTITION BY KEY(Msisdn) PARTITIONS 50;
    CREATE TABLE `UserProfile` (
        `Msisdn` decimal(20,0) NOT NULL,
        .
        .
        PRIMARY KEY (`Msisdn`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    PARTITION BY KEY(Msisdn) PARTITIONS 50;
    The second table is updated by a Perl program that selects from the first table ordered by date:
    select * from UserUsage where Msisdn=number order by Date desc limit 7
    [Process data in perl]
    update UserProfile values(....) where Msisdn=number
    EXPLAIN PARTITIONS for the select shows rows being scanned in one particular partition only. Is something wrong with the partition design or the queries, given that partitioning is taking almost the same time as, or more time than, the normal tables?

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are: errorType LargeFile::read( void* data_out, __int64 start_position, __int64 size_bytes ) const { if( !m_open ) { // return error } else { seekPosition( start_position ); DWORD bytes_read; BOOL result = ReadFile( m_file, data_out, DWORD( size_bytes ), &bytes_read, NULL ); if( size_bytes != bytes_read || result != TRUE ) { // return error } } // return no error } void LargeFile::seekPosition( __int64 position ) const { LARGE_INTEGER target; target.QuadPart = LONGLONG( position ); SetFilePointerEx( m_file, target, NULL, FILE_BEGIN ); } The performance of the above does not seem to be very good. Reads are on 4K blocks of the file. Some reads are coherent, most are not. A couple questions: Is there a good way to profile the reads? What things might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file i/o optimization, so suggestions or pointers to articles/tutorials would be helpful.

  • Java + increasing performance and scalability

    - by varun
    Hi, below is a code snippet which returns an object of a class; the object is basically compared against some parameters in a loop. My concern is: what if there are thousands of objects in the loop? In that case performance and scalability could become an issue. Please suggest how the performance of this code could be improved.
    public Widget get(String name, int major, int minor, boolean exact) {
        Widget widgetToReturn = null;
        if (exact) {
            Widget w = new Widget(name, major, minor);
            // for loop using JDK 1.5 version
            for (Widget wid : set) {
                if ((w.getName().equals(wid.getName())) && (wid.getVersion()).equals(w.getVersion())) {
                    widgetToReturn = w;
                    break;
                }
            }
        } else {
            Widget w = new Widget(name, major, minor);
            WidgetVersion widgetVersion = new WidgetVersion(major, minor);
            // for loop using JDK 1.5 version
            for (Widget wid : set) {
                WidgetVersion wv = wid.getVersion();
                if ((w.getName().equals(wid.getName())) && major == wv.getMajor() && WidgetVersion.isCompatibleAndNewer(wv, widgetVersion)) {
                    widgetToReturn = wid;
                } else if ((w.getName().equals(wid.getName())) && wv.equals(widgetVersion.getMajor(), widgetVersion.getMinor())) {
                    widgetToReturn = w;
                }
            }
        }
        return widgetToReturn;
    }
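
    For the exact-match branch in particular, a common alternative to the linear scan is to index the widgets once by name and version and look them up directly. A rough sketch of that idea, written in C# rather than the original Java, with hypothetical Widget/WidgetVersion shapes:

    ```csharp
    using System.Collections.Generic;

    // Hypothetical minimal shapes standing in for the Java Widget/WidgetVersion classes.
    record WidgetVersion(int Major, int Minor);
    record Widget(string Name, WidgetVersion Version);

    class WidgetIndex
    {
        // Built once, keyed by (name, major, minor), so an exact lookup is O(1)
        // instead of a scan over thousands of widgets.
        private readonly Dictionary<(string, int, int), Widget> _exact = new();

        public WidgetIndex(IEnumerable<Widget> widgets)
        {
            foreach (var w in widgets)
                _exact[(w.Name, w.Version.Major, w.Version.Minor)] = w;
        }

        public Widget GetExact(string name, int major, int minor)
        {
            _exact.TryGetValue((name, major, minor), out var found);
            return found; // null when there is no exact match
        }
    }
    ```
    The non-exact branch still needs its compatibility scan, but it can be restricted to the widgets sharing the same name (for example a Dictionary<string, List<Widget>>), which is usually a far smaller set than the whole collection.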

  • SOAP 1.2 Endpoint Performance

    - by mflair2000
    I have a Client WCF Host Service SOAP 1.2 Service setup and i'm having performance issues on the SOAP Java proxy. I have no control over how the Java service is setup, aside from the endpoint config. I have the Client running Asynchronously, and the host WCF service running in Async pattern, but i see the SOAP 1.2 proxy bottlenecking and handling the requests in a Synchronous way. Can someone take a look at the auto-generated SOAP 1.2 configuration below, from the SOAP 1.2 Service wsdl? Is there a way to configure this for Async way and improve performance? Could be configured for SOAP 1.1? <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true"/> </system.web> <system.serviceModel> <bindings> <customBinding> <binding name="WebserviceListenerSoap12Binding" closeTimeout="00:30:00" openTimeout="00:30:00" receiveTimeout="00:30:00" sendTimeout="00:30:00"> <textMessageEncoding maxReadPoolSize="64" maxWritePoolSize="16" messageVersion="Soap12" writeEncoding="utf-8"> <readerQuotas maxDepth="32" maxStringContentLength="20000000" maxArrayLength="20000000" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> </textMessageEncoding> <httpTransport manualAddressing="false" maxBufferPoolSize="20000000" maxReceivedMessageSize="20000000" allowCookies="false" authenticationScheme="Anonymous" bypassProxyOnLocal="false" decompressionEnabled="true" hostNameComparisonMode="StrongWildcard" keepAliveEnabled="true" maxBufferSize="20000000" proxyAuthenticationScheme="Anonymous" realm="" transferMode="Buffered" unsafeConnectionNtlmAuthentication="false" useDefaultWebProxy="true" /> </binding> </customBinding> </bindings> <client> <endpoint address="http://10.18.2.117:8080/java/webservice/WebserviceListener.WebserviceListenerHttpSoap12Endpoint/" binding="customBinding" bindingConfiguration="WebserviceListenerSoap12Binding" contract="ResolveServiceReference.WebserviceListenerPortType" name="WebserviceListenerHttpSoap12Endpoint" /> </client> </system.serviceModel> </configuration>

  • Developers: How does BitLocker affect performance?

    - by Chris
    I'm an ASP.NET / C# developer. I use VS2010 all the time. I am thinking of enabling BitLocker on my laptop to protect the contents, but I am concerned about performance degradation. Developers who use IDEs like Visual Studio are working on lots and lots of files at once. More than the usual office worker, I would think. So I was curious if there are other developers out there who develop with BitLocker enabled. How has the performance been? Is it noticeable? If so, is it bad? My laptop is a 2.53GHz Core 2 Duo with 4GB RAM and an Intel X25-M G2 SSD. It's pretty snappy but I want it to stay that way. If I hear some bad stories about BitLocker, I'll keep doing what I am doing now, which is keeping stuff RAR'ed with a password when I am not actively working on it, and then SDeleting it when I am done (but it's such a pain).

  • NHibernate Performance Optimization | Suggestions invited!!!

    - by user336749
    Hi, I'm facing an issue with NHibernate performance; can you please suggest some optimizations? Here is a short summary of my application architecture. I have a Windows service listening to a messaging bus. On receiving a message, the service creates an object, one property of which is the received XML snippet, and saves the message to the DB (using NH). There is a WPF UI with a read-only connection to the DB, and on refresh it displays the objects on screen. While the UI does a refresh, it retrieves the XML and deserializes it, and the object's properties are derived from it and bound to the screen. For example, assume an XML snippet XXX is received by the service: it deserializes the XML, creates the book object and saves it to the DB, where a property/column SCHEMA contains the XML snippet. The UI, when refreshed, searches all book objects by ID and creates the book objects out of the XML that was saved (yes, the XML is the constructor parameter). Now my issue is that the refresh takes more than 2 minutes to display, say, 50 book objects. I analyzed it using the NHibernate profiler and found that the time spent within the DB is negligible, but the time spent creating the entities is proportionally huge (10 ms : 1990 ms). I guess it's due to the fairly large size of the XML snippet and its deserialization. My question is: how can I improve the performance? I dispose of sessions after every refresh and am not using lazy loading (please note that the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by some downstream systems, or maybe only one of them has. Can I implement some sort of caching mechanism in this case? Thanks in advance for any suggestions. Regards, -Mike
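
    A rough sketch of the kind of client-side cache being asked about, assuming the table has (or can be given) a row-version/timestamp column so the expensive XML deserialization only happens for rows that actually changed; all type and member names here are hypothetical:

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical shape of what the refresh query returns: id, version, XML payload.
    class BookRow
    {
        public int Id { get; set; }
        public byte[] RowVersion { get; set; }
        public string SchemaXml { get; set; }
    }

    // Placeholder for the real entity whose constructor parses the XML.
    class Book
    {
        public Book(string xml) { /* expensive deserialization happens here */ }
    }

    class BookCache
    {
        private readonly Dictionary<int, (byte[] Version, Book Entity)> _cache =
            new Dictionary<int, (byte[], Book)>();

        // Called on every UI refresh with the freshly queried rows.
        public IEnumerable<Book> Materialize(IEnumerable<BookRow> rows)
        {
            foreach (var row in rows)
            {
                if (_cache.TryGetValue(row.Id, out var cached) &&
                    cached.Version.SequenceEqual(row.RowVersion))
                {
                    yield return cached.Entity;          // unchanged: reuse, skip the XML work
                }
                else
                {
                    var book = new Book(row.SchemaXml);  // new or changed: pay the cost once
                    _cache[row.Id] = (row.RowVersion, book);
                    yield return book;
                }
            }
        }
    }
    ```
    If no version column is available, a hash of the XML string can serve the same purpose, though the XML itself still has to travel over the wire on each refresh.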

  • Socket Performance C++ Or C#

    - by modernzombie
    I have to write an application that is essentially a proxy server to handle all HTTP and HTTPS requests from our server (web browsing, etc). I know very little C++ and am very comfortable writing the application features in C#. I have experimented with the proxy from Mentalis (socket proxy) which seems to work fine for small webpages but if I go to large sites like tigerdirect.ca and browse through a couple of layers it is very slow and sometimes requests don't complete and I see broken images and javascript errors. This happens with all of our vendor sites and other content heavy sites. Mentalis uses HTTP 1.0 which I know is not as efficient but should a proxy be that slow? What is an acceptable amount of performance loss from using a proxy? Would HTTP 1.1 make a noticeable difference? Would a C++ proxy be much faster than one in C#? Is the Mentalis code just not efficient? Would I be able to use a premade C++ proxy and import the DLL to C# and still get good performance or would this project call for all C++? Sorry if these are obvious questions but I have not done network programming before.
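
    On the "would C++ be much faster" part: the raw byte-pumping half of a proxy is mostly I/O-bound, and .NET sockets keep up fine as long as the relay is asynchronous and buffered. A minimal sketch of a bidirectional TCP relay (HTTP parsing, CONNECT handling and error handling all omitted, and the port/host values are placeholders):

    ```csharp
    using System.Net;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    class TcpRelay
    {
        // Accept local clients and pump bytes both ways between client and upstream.
        public static async Task RunAsync(int listenPort, string upstreamHost, int upstreamPort)
        {
            var listener = new TcpListener(IPAddress.Loopback, listenPort);
            listener.Start();
            while (true)
            {
                TcpClient client = await listener.AcceptTcpClientAsync();
                _ = HandleAsync(client, upstreamHost, upstreamPort); // one task per connection
            }
        }

        static async Task HandleAsync(TcpClient client, string host, int port)
        {
            using (client)
            using (var upstream = new TcpClient())
            {
                await upstream.ConnectAsync(host, port);
                var clientStream = client.GetStream();
                var upstreamStream = upstream.GetStream();

                // Copy in both directions concurrently; large buffers keep big pages moving.
                Task up = clientStream.CopyToAsync(upstreamStream, 81920);
                Task down = upstreamStream.CopyToAsync(clientStream, 81920);
                await Task.WhenAny(up, down); // either side closing ends the session
            }
        }
    }
    ```
    In practice the things that make a proxy feel slow on heavy sites are protocol-level: HTTP/1.0 with no keep-alive (as in the Mentalis code mentioned above), buffering whole responses instead of streaming, and serializing connections. Those matter far more than C# versus C++.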

  • Misaligned Pointer Performance

    - by Elite Mx
    Aren't misaligned pointers (in the BEST possible case) supposed to slow down performance and in the worst case crash your program (assuming the compiler was nice enough to compile your invalid c program). Well, the following code doesn't seem to have any performance differences between the aligned and misaligned versions. Why is that?
    /* brutality.c */
    #ifdef BRUTALITY
    xs = (unsigned long *) ((unsigned char *) xs + 1);
    #endif
    ...
    /* main.c */
    #include <stdio.h>
    #include <stdlib.h>
    #define size_t_max ((size_t)-1)
    #define max_count(var) (size_t_max / (sizeof var))
    int main(int argc, char *argv[]) {
        unsigned long sum, *xs, *itr, *xs_end;
        size_t element_count = max_count(*xs) >> 4;
        xs = malloc(element_count * (sizeof *xs));
        if(!xs) exit(1);
        xs_end = xs + element_count - 1;
        sum = 0;
        for(itr = xs; itr < xs_end; itr++) *itr = 0;
    #include "brutality.c"
        itr = xs;
        while(itr < xs_end) sum += *itr++;
        printf("%lu\n", sum);
        /* we could free the malloc-ed memory here */
        /* but we are almost done */
        exit(0);
    }
    Compiled and tested on two separate machines using
    gcc -pedantic -Wall -O0 -std=c99 main.c
    for i in {0..9}; do time ./a.out; done

  • Dynamic connection for LINQ to SQL DataContext

    - by Steve Clements
    If for some reason you need to specify a specific connection string for a DataContext, you can of course pass the connection string when you initialise your DataContext object. A common scenario could be a dev/test/stage/live connection string, but in my case it's for either a live or an archive database. I, however, want the connection string to be handled by the DataContext. There are probably lots of different reasons someone would want to do this… but here are mine: I want the same connection string for all instances of DataContext, but I don't know what it is yet! I prefer the clean code and ease of not using a constructor parameter. The refactoring of using a constructor parameter could be a nightmare. So my approach is to create a new partial class for the DataContext and handle the empty constructor in there. First, from within the LINQ to SQL designer I changed the Connection property to None. This will remove the empty constructor code from the auto-generated designer.cs file. Right-click the .dbml file, click View Code, and a file and class are created for you! You'll see the new class in Solution Explorer and the file will open. We are going to be playing with constructors, so you need to add the inheritance from System.Data.Linq.DataContext:
    public partial class DataClasses1DataContext : System.Data.Linq.DataContext
    {
    }
    Add the empty constructor; I have also added a property that gets my connection string. You will have whatever logic you need to decide on and fetch the connection string you require. In my case I will be hitting a database, but I have omitted that code.
    public partial class DataClasses1DataContext : System.Data.Linq.DataContext
    {
        // Connection String Keys - stored in web.config
        static string LiveConnectionStringKey = "LiveConnectionString";
        static string ArchiveConnectionStringKey = "ArchiveConnectionString";
        protected static string ConnectionString
        {
            get
            {
                if (DoIWantToUseTheLiveConnection) {
                    return global::System.Configuration.ConfigurationManager.ConnectionStrings[LiveConnectionStringKey].ConnectionString;
                }
                else {
                    return global::System.Configuration.ConfigurationManager.ConnectionStrings[ArchiveConnectionStringKey].ConnectionString;
                }
            }
        }
        public DataClasses1DataContext() :
            base(ConnectionString, mappingSource)
        {
            OnCreated();
        }
    }
    Now when I new up my DataContext, I can just leave the constructor empty and my partial class will decide which one I need to use. Nice, clean code that can be easily refactored and tested.
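
    A quick usage sketch of the empty constructor in action; DataClasses1DataContext is the partial class from the post, while the Orders table and its CreatedOn column are hypothetical, purely for illustration:

    ```csharp
    using System;
    using System.Linq;

    class Demo
    {
        static void Main()
        {
            // The partial class picks the live or archive connection string internally,
            // so calling code never needs to know which database it is talking to.
            using (var db = new DataClasses1DataContext())
            {
                var recent = db.Orders
                               .Where(o => o.CreatedOn > DateTime.Today.AddDays(-7))
                               .ToList();
                Console.WriteLine("Rows: " + recent.Count);
            }
        }
    }
    ```
    The same approach keeps unit tests simple as well: the connection-selection logic lives in one place (the ConnectionString property) instead of at every construction site.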

  • ORM Profiler v1.1 has been released!

    - by FransBouma
    We've released ORM Profiler v1.1, which has the following new features: Real time profiling A real time viewer (RTV) has been added, which gives insight in the activity as it is received by the client, in two views: a chronological connection overview and an activity graph overview. This RTV allows the user to directly record to a snapshot using record buttons, pause the view, mark a range to create a snapshot from that range, and view graphs about the # of connection open actions and # of commands per second. The RTV has a 'range' in which it keeps live data and auto-cleans data that's older than this range. Screenshot of the activity graphs part of the real-time viewer: Low-level activity tab A new tab has been added to the Application tabs: the Low-level activity tab. This tab shows the main activity as it has been received over the named pipe. It can help to get insight in the chronological activity without the grouping over connections, so multiple connections at the same time per thread are easier to spot. Clicking a command will sync the rest of the application tabs, clicking a row will show the details below the splitter bar, as it is done with the other application tabs as well. Default application name in interceptor When an empty string or null is passed for application name to the Initialize method of the interceptor, the AppDomain's friendly name is used instead. Copy call stack to clipboard A call stack viewed in a grid in various parts of the UI is now copyable to the clipboard by clicking a button. Enable/Disable interceptor from the config file It's now possible to enable/disable the interceptor Initialization from the application's config file, using: Code: <appSettings> <add key="ORMProfilerEnabled" value="true"/> </appSettings> if value is true, the interceptor's Initialize method will proceed. If the value is false, the interceptor's Initialize method will not proceed and initialization won't be performed, meaning no interception will take place. If the setting is absent, or misconfigured, the Initialize method will proceed as normal and perform the initialization. Stored procedure calls for select databases are now properly displayed as a call For the databases: SQL Server, Oracle, DB2, Sybase ASA, Sybase ASE and Informix a stored procedure call is displayed as an execute/call statement and copy to clipboard works as-is. I'm especially happy with the new real-time profiling feature in ORM Profiler, which is the flagship feature for this release: it offers a completely new way to use the profiler, namely directly during debugging: you can immediately see what's going on without the necessity of a snapshot. The activity graph feature combined with the auto-cleanup of older data, allows you to keep the profiler open for a long period of time and see any spike of activity on the profiled application.

  • Data Modeling Resources

    - by Dejan Sarka
    You can find many different data modeling resources. It is impossible to list all of them. I selected only the most valuable ones for me, and, of course, the ones I contributed to. Books Chris J. Date: An Introduction to Database Systems – IMO a “must” to understand the relational model correctly. Terry Halpin, Tony Morgan: Information Modeling and Relational Databases – meet the object-role modeling leaders. Chris J. Date, Nikos Lorentzos and Hugh Darwen: Time and Relational Theory, Second Edition: Temporal Databases in the Relational Model and SQL – all theory needed to manage temporal data. Louis Davidson, Jessica M. Moss: Pro SQL Server 2012 Relational Database Design and Implementation – the best SQL Server focused data modeling book I know by two of my friends. Dejan Sarka, et al.: MCITP Self-Paced Training Kit (Exam 70-441): Designing Database Solutions by Using Microsoft® SQL Server™ 2005 – SQL Server 2005 data modeling training kit. Most of the text is still valid for SQL Server 2008, 2008 R2, 2012 and 2014. Itzik Ben-Gan, Lubor Kollar, Dejan Sarka, Steve Kass: Inside Microsoft SQL Server 2008 T-SQL Querying – Steve wrote a chapter with mathematical background, and I added a chapter with theoretical introduction to the relational model. Itzik Ben-Gan, Dejan Sarka, Roger Wolter, Greg Low, Ed Katibah, Isaac Kunen: Inside Microsoft SQL Server 2008 T-SQL Programming – I added three chapters with theoretical introduction and practical solutions for the user-defined data types, dynamic schema and temporal data. Dejan Sarka, Matija Lah, Grega Jerkic: Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft SQL Server 2012 – my first two chapters are about data warehouse design and implementation. Courses Data Modeling Essentials – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar way, you can contact your closes SolidQ subsidiary, or, of course, me directly on addresses [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, which are welcome to contact me as well. Logical and Physical Modeling for Analytical Applications – online course I wrote for Pluralsight. Working with Temporal data in SQL Server – my latest Pluralsight course, where besides theory and implementation I introduce many original ways how to optimize temporal queries. Forthcoming presentations SQL Bits 12, July 17th – 19th, Telford, UK – I have a full-day pre-conference seminar Advanced Data Modeling Topics there.

  • How to prevent ‘Select *’ : The elegant way

    - by Dave Ballantyne
    I’ve been doing a lot of work with the “Microsoft SQL Server 2012 Transact-SQL Language Service” recently, see my post here and article here for more details on its use and some uses. An obvious use is to interrogate sql scripts to enforce our coding standards.  In the SQL world a no-brainer is SELECT *,  all apologies must now be given to Jorge Segarra and his post “How To Prevent SELECT * The Evil Way” as this is a blatant rip-off IMO, the only true way to check for this particular evilness is to parse the SQL as if we were SQL Server itself.  The parser mentioned above is ,pretty much, the best tool for doing this.  So without further ado lets have a look at a powershell script that does exactly that : cls #Load the assembly [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.SqlParser") | Out-Null $ParseOptions = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.ParseOptions $ParseOptions.BatchSeparator = 'GO' #Create the object $Parser = new-object Microsoft.SqlServer.Management.SqlParser.Parser.Scanner($ParseOptions) $SqlArr = Get-Content "C:\scripts\myscript.sql" $Sql = "" foreach($Line in $SqlArr){ $Sql+=$Line $Sql+="`r`n" } $Parser.SetSource($Sql,0) $Token=[Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SET $IsEndOfBatch = $false $IsMatched = $false $IsExecAutoParamHelp = $false $Batch = "" $BatchStart =0 $Start=0 $End=0 $State=0 $SelectColumns=@(); $InSelect = $false $InWith = $false; while(($Token = $Parser.GetNext([ref]$State ,[ref]$Start, [ref]$End, [ref]$IsMatched, [ref]$IsExecAutoParamHelp ))-ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::EOF) { $Str = $Sql.Substring($Start,($End-$Start)+1) try{ ($TokenPrs =[Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]$Token) | Out-Null #Write-Host $TokenPrs if($TokenPrs -eq [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SELECT){ $InSelect =$true $SelectColumns+="" } if($TokenPrs -eq [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_FROM){ $InSelect =$false #Write-Host $SelectColumns -BackgroundColor Red foreach($Col in $SelectColumns){ if($Col.EndsWith("*")){ Write-Host "select * is not allowed" exit } } $SelectColumns =@() } }catch{ #$Error $TokenPrs = $null } if($InSelect -and $TokenPrs -ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SELECT){ if($Str -eq ","){ $SelectColumns+="" }else{ $SelectColumns[$SelectColumns.Length-1]+=$Str } } } OK, im not going to pretend that its the prettiest of powershell scripts,  but if our parsed script file “C:\Scripts\MyScript.SQL” contains SELECT * then “select * is not allowed” will be written to the host.  So, where can this go wrong ?  It cant ,or at least shouldn’t , go wrong, but it is lacking in functionality.  IMO, Select * should be allowed in CTEs, views and Inline table valued functions at least and as it stands they will be reported upon. Anyway, it is a start and is more reliable that other methods.

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - e.g. not PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an Explain Plan is a set of producers and a set of consumers. The producers scan the tables in the the join. If there are two tables the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator. Now that behavior means that if you choose a degree of parallelism of 4 to run such query with, Oracle will allocate 8 parallel processes. Of these 8 processes 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done with scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumer indicated by PX SEND [line 5]. After receiving that data [line 4] the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true. The above is why you will see 2 times the processes of the DOP. In a PWJ plan the consumers are not present. Instead of producing rows and giving those to different processes, a PWJ only uses a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous so there is a bit of noise above that bit of info, but for this post, lets ignore that (sort stuff). The important piece here is that the PWJ plan typically will be faster and from a PX process number / resources typically cheaper. You may want to look out for those plans and try to get those to appear a lot... CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer stuff over here: ODTUG in Washington DC, June 27 - July 1 On the Optimizer blog At OpenWorld in San Francisco, September 19 - 23 Happy joining and hope to see you all at ODTUG and OOW...

  • Alternatives for comparing data from different databases

    - by Alex
    I have two huge tables in separate databases. One of them has the information on all the SMS that passed through the company's servers, while the other one has the information on the actual billing of those SMS. My job is to compare samples of both of these tables (for example, the records between 1 and 2 pm) to see if there are any differences: SMS that were sent but, for whatever reason, not charged to the user. The columns I will be using to compare are the sender's phone number and the exact date the SMS was sent. An issue here is that the dates are usually the same on both sides, but in many cases differ by 1 or 2 seconds. I have, so far, two alternatives for doing this:
    (PL/SQL) Create two tables where I'm going to temporarily store all the records of that 1-hour sample, one for each of the main tables. Then, for each distinct phone number, select the time of every SMS sent from that phone from both of my temporary tables and start comparing them one by one using cursors. In this case, the procedure would be run on the server where one of the sources is, so the contents of the other one would be looked up using a dblink.
    (sqlplus + C++) Instead of storing the 1-hour samples in new tables, output the query to a text file. I will have two text files, one for each source. Then open the first file and load all of its content into a hash_map (key-value) using C++, where the key is the phone number and the value is a list of times of SMS sent from that phone. Finally, open the second file, grab each line (in this format: numberX timeX), look for numberX's entry in the hash_map (which will be a list of times) and then check if timeX is in that list. If it isn't, save it somewhere to finally store it in an "uncharged" table (this would also be the final step in case 1).
    My main concern is efficiency. These samples have about 2 million records on each source, so just grabbing one record on one side and looking it up on the other would not be possible. That's the reason I wanted to use hash_maps. Which do you think is a better option?
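
    A sketch of the matching step in option 2, written in C# rather than the C++ the post mentions, including a ±2-second tolerance on the send time; the file names and the "number timestamp" line format follow the description above, and the paths are placeholders:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    class SmsReconciliation
    {
        static void Main()
        {
            // Index the billed SMS sample: phone number -> list of send times.
            var billed = new Dictionary<string, List<DateTime>>();
            foreach (var line in File.ReadLines("billed_sample.txt"))        // placeholder path
            {
                var parts = line.Split(new[] { ' ' }, 2);
                if (!billed.TryGetValue(parts[0], out var times))
                    billed[parts[0]] = times = new List<DateTime>();
                times.Add(DateTime.Parse(parts[1]));
            }

            // For every sent SMS, accept any billed entry within +/- 2 seconds.
            using (var uncharged = new StreamWriter("uncharged.txt"))        // placeholder path
            {
                foreach (var line in File.ReadLines("sent_sample.txt"))      // placeholder path
                {
                    var parts = line.Split(new[] { ' ' }, 2);
                    var sentAt = DateTime.Parse(parts[1]);
                    bool matched = billed.TryGetValue(parts[0], out var times) &&
                                   times.Any(t => Math.Abs((t - sentAt).TotalSeconds) <= 2);
                    if (!matched) uncharged.WriteLine(line);
                }
            }
        }
    }
    ```
    With about 2 million rows per side this should fit in memory comfortably, and the dictionary lookup avoids the per-record round trips that make the cursor/dblink variant slow; the per-number time lists are small enough that the linear tolerance check inside Any is cheap.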

  • Looking for recommendations for a server-side newsletter program

    - by Sparky672
    Hello- I'm currently using a server-side SQL based mailing list program called Php-List on multiple sites and it works fairly well. But installation and setup is quite cumbersome, quirky and the interface is not well organized... neither is the code... with pieces all over the place in random fashion. Customizing the "look & feel" and full site integration are both tedious and painful. Upgrading the version is made more complex since multiple edits need to be manually transferred each time. Also, probably due to a poor English translation, descriptions and instructions within certain areas of the user interface are contradictory and unclear. You just have to play with it and remember what you did last time it worked. It's supposed to be so my customers can send out their own newsletters... after supplying a written tutorial, about half of them seem to stumble through it okay and the other half just hire me to do it for them. So not quite easy enough for most average people to use. I'm looking for something that's as easy for them as using a blog or discussion forum. It also must be easier to set up and integrate into a site than Php-List. I have no problem getting dirty and writing CSS or HTML by hand. Nor do I have any problem editing the program code. Perhaps what I'm looking for is a solution that is more organized, a better GUI, and template or "skin" based. Therefore, if I spend many hours customizing a skin, I can simply update the program and re-use my custom skin without having to reproduce the tedious setup over and over. (I currently maintain a list of about 25 things I must manually edit or add to multiple files in multiple directories each time I install or upgrade Php-List) A great example of what I'm looking for is very much like WordPress or phpBB. They're both easy to install and customize yet powerful and packed full of features. They're also VERY well organized making customization less painful. So enough yammering for now... anyone know of something, besides Php-List, with many of the same features as Php-List; maintaining a mailing list with a server-side database, custom sign-up pages, automatic opt-in opt-out, allowing custom HTML newsletter templates, etc? Thank-you!

  • ssh + tinyproxy: poor performance

    - by Paul
    I am currently in China and I would still like to visit some blocked websites (Facebook, YouTube). I have a VPS in the USA and I have installed tinyproxy on it. I log in to my VPS with SSH port forwarding and I have configured my browser appropriately. Everything works more or less: I can surf to those websites, but everything is unusually slow and sometimes data transfer stops abruptly. This probably has to do with the fact that I see some errors in my shell on the VPS like:
    channel 6: open failed: connect failed:
    Also in the log file of tinyproxy I see some bad things:
    ERROR Sep 06 14:52:14 [28150]: getpeer_information: getpeername() error: Transport endpoint is not connected
    ERROR Sep 06 14:52:15 [28153]: writebuff: write() error "Connection reset by peer" on file descriptor 7
    ERROR Sep 06 14:52:15 [28168]: readbuff: recv() error "Connection reset by peer" on file descriptor 7
    ERROR Sep 06 14:52:15 [28151]: readbuff: recv() error "Connection reset by peer" on file descriptor 7
    ERROR Sep 06 14:52:15 [28143]: readbuff: recv() error "Connection reset by peer" on file descriptor 7
    ERROR Sep 06 14:52:17 [28147]: writebuff: write() error "Connection reset by peer" on file descriptor 7
    ERROR Sep 06 14:52:23 [28137]: writebuff: write() error "Connection reset by peer" on file descriptor 7
    ERROR Sep 06 14:52:26 [28168]: getpeer_information: getpeername() error: Transport endpoint is not connected
    ERROR Sep 06 14:52:27 [28186]: read_request_line: Client (file descriptor: 7) closed socket before read.
    ERROR Sep 06 14:52:31 [28160]: getpeer_information: getpeername() error: Transport endpoint is not connected
