Search Results

Search found 37647 results on 1506 pages for 'sql performance'.

Page 856 of 1506

  • Does Microsoft hate firefox? ASP.Net gridview performance in firefox bug?

    - by Maxim Gershkovich
    Could someone please explain the significant difference in speed between a Firefox UpdatePanel async postback and one performed in IE?

    Average Firefox postback time for 500 objects: 1.183 seconds
    Average IE postback time for 500 objects: 0.295 seconds

    Using Firebug I can see that the majority of this time in Firefox is spent on the server side: a total of 1.04 seconds. Given this fact, the only thing I can assume is causing the problem is the way ASP.NET renders its controls for the two browsers. Has anyone run into this problem before?

    VB.Net code:

        Protected Sub Button1_Click(ByVal sender As Object, ByVal e As EventArgs) Handles Button1.Click
            GridView1.DataBind()
        End Sub

        Public Function GetStockList() As StockList
            Dim res As New StockList
            For l = 0 To 500
                Dim x As New Stock With {.Description = "test", .ID = Guid.NewGuid}
                res.Add(x)
            Next
            Return res
        End Function

        Public Class Stock
            Private m_ID As Guid
            Private m_Description As String

            Public Sub New()
            End Sub

            Public Property ID() As Guid
                Get
                    Return Me.m_ID
                End Get
                Set(ByVal value As Guid)
                    Me.m_ID = value
                End Set
            End Property

            Public Property Description() As String
                Get
                    Return Me.m_Description
                End Get
                Set(ByVal value As String)
                    Me.m_Description = value
                End Set
            End Property
        End Class

        Public Class StockList
            Inherits List(Of Stock)
        End Class

    Markup:

        <form id="form1" runat="server">
            <asp:ScriptManager ID="ScriptManager1" runat="server">
            </asp:ScriptManager>
            <script type="text/javascript" language="Javascript">
                function timestamp_class(this_current_time, this_start_time, this_end_time, this_time_difference) {
                    this.this_current_time = this_current_time;
                    this.this_start_time = this_start_time;
                    this.this_end_time = this_end_time;
                    this.this_time_difference = this_time_difference;
                    this.GetCurrentTime = GetCurrentTime;
                    this.StartTiming = StartTiming;
                    this.EndTiming = EndTiming;
                }

                // Get current time from date timestamp
                function GetCurrentTime() {
                    var my_current_timestamp;
                    my_current_timestamp = new Date(); // stamp current date & time
                    return my_current_timestamp.getTime();
                }

                // Stamp current time as start time and reset display textbox
                function StartTiming() {
                    this.this_start_time = GetCurrentTime(); // stamp current time
                }

                // Stamp current time as stop time, compute elapsed time difference and display in textbox
                function EndTiming() {
                    this.this_end_time = GetCurrentTime(); // stamp current time
                    this.this_time_difference = (this.this_end_time - this.this_start_time) / 1000; // compute elapsed time
                    return this.this_time_difference;
                }
            </script>
            <script type="text/javascript" language="javascript">
                var time_object = new timestamp_class(0, 0, 0, 0); // create new time object and initialize it
                Sys.WebForms.PageRequestManager.getInstance().add_beginRequest(BeginRequestHandler);
                Sys.WebForms.PageRequestManager.getInstance().add_endRequest(EndRequestHandler);

                function BeginRequestHandler(sender, args) {
                    var elem = args.get_postBackElement();
                    ActivateAlertDiv('visible', 'divAsyncRequestTimer', elem.value + '');
                    time_object.StartTiming();
                }

                function EndRequestHandler(sender, args) {
                    ActivateAlertDiv('visible', 'divAsyncRequestTimer', '(' + time_object.EndTiming() + ' Seconds)');
                }

                function ActivateAlertDiv(visstring, elem, msg) {
                    var adiv = $get(elem);
                    adiv.style.visibility = visstring;
                    adiv.innerHTML = msg;
                }
            </script>
            <asp:UpdatePanel ID="UpdatePanel1" runat="server">
                <Triggers>
                    <asp:AsyncPostBackTrigger ControlID="Button1" EventName="click" />
                </Triggers>
                <ContentTemplate>
                    <asp:UpdateProgress ID="UpdateProgress1" runat="server" AssociatedUpdatePanelID="UpdatePanel1">
                    </asp:UpdateProgress>
                    <asp:Button ID="Button1" runat="server" Text="Button" />
                    <div id="divAsyncRequestTimer" style="font-size:small;"></div>
                    <asp:GridView ID="GridView1" runat="server" DataSourceID="ObjectDataSource1" AutoGenerateColumns="False">
                        <Columns>
                            <asp:BoundField DataField="ID" HeaderText="ID" SortExpression="ID" />
                            <asp:BoundField DataField="Description" HeaderText="Description" SortExpression="Description" />
                        </Columns>
                    </asp:GridView>
                    <asp:ObjectDataSource ID="ObjectDataSource1" runat="server" SelectMethod="GetStockList" TypeName="WebApplication1._Default">
                    </asp:ObjectDataSource>
                </ContentTemplate>
            </asp:UpdatePanel>
        </form>

    Read the article

  • slow php command line performance - is this normal or do I have an install problem?

    - by Frank Schwieterman
    I have a simple PHP app that prints 'hello world'. When I run it from the command line it takes 6 seconds. Is this normal? It seems to take 1 second before "hello world" prints, then 5 seconds after; I assume this is overhead of the interpreter. I am running PHP version 5.2.12 on Windows Server 2008 R2. Could this be an install issue, or is it typical? I did a manual install of PHP and then added whatever components were needed to run Drupal. The only PHP add-on I remember adding was MDB2; CGI support is there too. By comparison, I have a Lua project I run from the command line, hundreds of lines of code, that runs in under a second. I also have some unit tests I run from the command line, and already with just a few of them they are very slow. I run them from NetBeans and the tests are still very slow.

    Read the article

  • [CODE GENERATION] How to generate DELETE statements in PL/SQL, based on the tables' FK relations?

    - by The chicken in the kitchen
    Is it possible, via script or tool, to automatically generate many DELETE statements based on the tables' FK relations, using Oracle PL/SQL?

    For example: I have the table CHICKEN (CHICKEN_CODE NUMBER), and there are 30 tables with FK references to its CHICKEN_CODE whose rows I need to delete; there are also another 150 tables foreign-key-linked to those 30 tables that I need to delete from first. Is there some PL/SQL script or tool I can run to generate all the necessary DELETE statements based on the FK relations for me? (By the way, I know about cascade delete on the relations, but please pay attention: I CAN'T USE IT IN MY PRODUCTION DATABASE, because it's dangerous!) I'm using Oracle Database 10g R2.

    This is a view I have previously written, but of course it is not recursive:

        CREATE OR REPLACE FORCE VIEW RUN
        (
            OWNER_1,
            CONSTRAINT_NAME_1,
            TABLE_NAME_1,
            TABLE_NAME,
            VINCOLO
        )
        AS
            SELECT OWNER_1,
                   CONSTRAINT_NAME_1,
                   TABLE_NAME_1,
                   TABLE_NAME,
                   '(' || LTRIM (EXTRACT (XMLAGG (XMLELEMENT ("x", ',' || COLUMN_NAME)), '/x/text()'), ',') || ')' VINCOLO
              FROM (SELECT CON1.OWNER OWNER_1,
                           CON1.TABLE_NAME TABLE_NAME_1,
                           CON1.CONSTRAINT_NAME CONSTRAINT_NAME_1,
                           CON1.DELETE_RULE,
                           CON1.STATUS,
                           CON.TABLE_NAME,
                           CON.CONSTRAINT_NAME,
                           COL.POSITION,
                           COL.COLUMN_NAME
                      FROM DBA_CONSTRAINTS CON, DBA_CONS_COLUMNS COL, DBA_CONSTRAINTS CON1
                     WHERE CON.OWNER = 'TABLE_OWNER'
                       AND CON.TABLE_NAME = 'TABLE_OWNED'
                       AND ((CON.CONSTRAINT_TYPE = 'P') OR (CON.CONSTRAINT_TYPE = 'U'))
                       AND COL.TABLE_NAME = CON1.TABLE_NAME
                       AND COL.CONSTRAINT_NAME = CON1.CONSTRAINT_NAME
                       --AND CON1.OWNER = CON.OWNER
                       AND CON1.R_CONSTRAINT_NAME = CON.CONSTRAINT_NAME
                       AND CON1.CONSTRAINT_TYPE = 'R'
                  GROUP BY CON1.OWNER,
                           CON1.TABLE_NAME,
                           CON1.CONSTRAINT_NAME,
                           CON1.DELETE_RULE,
                           CON1.STATUS,
                           CON.TABLE_NAME,
                           CON.CONSTRAINT_NAME,
                           COL.POSITION,
                           COL.COLUMN_NAME)
          GROUP BY OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME;

    ...and it contains the error of using DBA_CONSTRAINTS instead of ALL_CONSTRAINTS.

    Read the article

  • wxPython - ListCtrl and SQLite3

    - by Dunwitch
    I'm trying to get a SQLite3 DB to populate a wx.ListCtrl. I can get it to print to stdout/stderr without any problem. I just can't seem to figure out how to display the data in the DataWindow/DataList. I'm sure I've made some code mistakes, so any help is appreciated.

    Main.py:

        import wx
        import wx.lib.mixins.listctrl as listmix
        from database import *
        import sys

        class DataWindow(wx.Frame):
            def __init__(self, parent=None):
                wx.Frame.__init__(self, parent, -1, 'DataList', size=(640, 480))
                self.win = DataList(self)
                self.Center()
                self.Show(True)

        class DataList(wx.ListCtrl, listmix.ListCtrlAutoWidthMixin, listmix.ColumnSorterMixin):
            def __init__(self, parent=DataWindow):
                wx.ListCtrl.__init__(self, parent, -1,
                                     style=wx.LC_REPORT | wx.LC_VIRTUAL | wx.LC_HRULES | wx.LC_VRULES)
                # building the columns
                self.InsertColumn(0, "Location")
                self.InsertColumn(1, "Address")
                self.InsertColumn(2, "Subnet")
                self.InsertColumn(3, "Gateway")
                self.SetColumnWidth(0, 100)
                self.SetColumnWidth(1, 150)
                self.SetColumnWidth(2, 150)
                self.SetColumnWidth(3, 150)

        class MainWindow(wx.Frame):
            def __init__(self, parent=None, id=-1, title="MainWindow"):
                wx.Frame.__init__(self, parent, id, title, size=(800, 600),
                                  style=wx.DEFAULT_FRAME_STYLE ^ (wx.RESIZE_BORDER))
                # StatusBar
                self.CreateStatusBar()
                # Filemenu
                filemenu = wx.Menu()
                # Filemenu - About
                menuitem = filemenu.Append(-1, "&About", "Information about this application")
                self.Bind(wx.EVT_MENU, self.onAbout, menuitem)
                # Filemenu - Data
                menuitem = filemenu.Append(-1, "&Data", "Get data")
                self.Bind(wx.EVT_MENU, self.onData, menuitem)
                # Filemenu - Seperator
                filemenu.AppendSeparator()
                # Filemenu - Exit
                menuitem = filemenu.Append(-1, "&Exit", "Exit the application")
                self.Bind(wx.EVT_MENU, self.onExit, menuitem)
                # Menubar
                menubar = wx.MenuBar()
                menubar.Append(filemenu, "&File")
                self.SetMenuBar(menubar)
                # Show
                self.Show(True)
                self.Center()

            def onAbout(self, event):
                pass

            def onData(self, event):
                DataWindow(self)
                callDb = Database()
                sql = "SELECT rowid, address, subnet, gateway FROM pod1"
                records = callDb.select(sql)
                for v in records:
                    print "How do I get the records on the DataList?"
                    #print "%s%s%s" % (v[1],v[2],v[3])
                #for v in records:
                    #DataList.InsertStringItem("%s") % (v[0], v[1], v[2])

            def onExit(self, event):
                self.Close()
                self.Destroy()

            def onSave(self, event):
                pass

        if __name__ == '__main__':
            app = wx.App()
            frame = MainWindow(None, -1)
            frame.Show()
            app.MainLoop()

    database.py:

        import os
        import sqlite3

        class Database(object):
            def __init__(self, db_file="data/data.sqlite"):
                database_allready_exists = os.path.exists(db_file)
                self.db = sqlite3.connect(db_file)
                if not database_allready_exists:
                    self.setupDefaultData()

            def select(self, sql):
                cursor = self.db.cursor()
                cursor.execute(sql)
                records = cursor.fetchall()
                cursor.close
                return records

            def insert(self, sql):
                newID = 0
                cursor = self.db.cursor()
                cursor.execute(sql)
                newID = cursor.lastrowid
                self.db.commit()
                cursor.close()
                return newID

            def save(self, sql):
                cursor = self.db.cursor()
                cursor.execute(sql)
                self.db.commit()
                cursor.close()

            def setupDefaultData(self):
                pass

    Read the article

  • Does a lambda in List.ForEach lead to memory leaks and performance problems?

    - by Monomachus
    I have a problem which I could solve using something like this:

        sortedElements.ForEach((XElement el) => PrintXElementName(el, i++));

    This means that in ForEach I have a lambda which lets me use variables like int i. I like that way of doing it, but I read somewhere that anonymous methods and delegates created from lambdas lead to a lot of memory leaks, because each time the lambda is executed something is instantiated but not released, or something like that. Could you please tell me whether this is true in this situation, and if it is, why?
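
    For context, a minimal sketch of what happens with a capturing lambda: the captured variable is hoisted onto an ordinary heap object, which the garbage collector reclaims once the delegate is unreachable. The DisplayClass type and Print method below are illustrative stand-ins for the compiler-generated closure, not real framework types.

        using System;
        using System.Collections.Generic;

        class ClosureSketch
        {
            static void Main()
            {
                var names = new List<string> { "a", "b", "c" };
                int i = 0;

                // The lambda captures the local 'i', so the compiler hoists it into a
                // hidden closure object that lives exactly as long as the delegate does.
                names.ForEach(name => Console.WriteLine("{0}: {1}", i++, name));

                // Roughly what the compiler emits behind the scenes:
                var closure = new DisplayClass { i = 0 };
                names.ForEach(closure.Print);
                // Once 'closure' and the delegate are unreachable, the GC collects them;
                // nothing unmanaged is held, so there is no leak in the usual sense.
            }

            // Illustrative stand-in for the compiler-generated closure class.
            class DisplayClass
            {
                public int i;
                public void Print(string name)
                {
                    Console.WriteLine("{0}: {1}", i++, name);
                }
            }
        }

    A real problem only appears when the delegate is stored somewhere long-lived (an event, a static list), because the closure then keeps its captured variables alive for the same duration.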

    Read the article

  • How does the verbosity of identifiers affect the performance of a programmer?

    - by DR
    I always wondered: Are there any hard facts which would indicate that either shorter or longer identifiers are better? Example: clrscr() opposed to ClearScreen() Short identifiers should be faster to read because there are fewer characters but longer identifiers often better resemble natural language and therefore also should be faster to read. Are there other aspects which suggest either a short or a verbose style? EDIT: Just to clarify: I didn't ask: "What would you do in this case?". I asked for reasons to prefer one over the other, i.e. this is not a poll question. Please, if you can, add some reason on why one would prefer one style over the other.

    Read the article

  • How can I extend a LINQ-to-SQL class without having to make changes every time the code is generated

    - by csharpnoob
    Hi,

    Update from comment: I need to extend the LINQ-to-SQL classes with my own members and I don't want to touch any generated classes. Better suggestions are welcome. I also don't want to redo all the attribute assignments every time the LINQ-to-SQL classes change; if Visual Studio generates a new attribute on a class, I want my own extended attributes kept separate, with the new ones inherited from the class itself.

    Original question: I'm not sure if it's possible. I have a class Car and a class MyCar derived from Car. MyCar also has a string list; that is the only difference. How can I cast any Car object to a MyCar object without assigning all the attributes by hand? Like:

        Car car = new Car();
        MyCar mcar = (MyCar) car;

    or

        MyCar mcar = new MyCar(car);

    or however I can extend Car with my own variables without always having to do

        Car car = new Car();
        MyCar mcar = new MyCar();
        mcar.name = car.name;
        mcar.xyz = car.xyz;
        ...

    Thanks.
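
    A hedged sketch of the usual way to keep hand-written members out of the generated file: the LINQ-to-SQL designer emits its entity classes as partial, so a second file can add members that survive every regeneration. The Name and Tags members below are invented for illustration, not designer output.

        using System.Collections.Generic;

        // Car.designer.cs -- what the designer (re)generates; Name stands in for the mapped columns.
        public partial class Car
        {
            public string Name { get; set; }
        }

        // Car.custom.cs -- hand-written; both declarations merge into one class at compile time,
        // so these members are untouched when the designer regenerates its half.
        public partial class Car
        {
            private readonly List<string> _tags = new List<string>();

            public List<string> Tags
            {
                get { return _tags; }
            }
        }

    Casting a base Car to a derived MyCar, on the other hand, cannot work at runtime; if a separate wrapper type is really needed, it has to copy or wrap the Car instance rather than cast it.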

    Read the article

  • GUID or int entity key with SQL Compact/EF4?

    - by David Veeneman
    This is a follow-up to an earlier question I posted on EF4 entity keys with SQL Compact. SQL Compact doesn't allow server-generated identity keys, so I am left with creating my own keys as objects are added to the ObjectContext. My first choice would be an integer key, and the previous answer linked to a blog post that shows an extension method that uses the Max operator with a selector expression to find the next available key:

        public static TResult NextId<TSource, TResult>(this ObjectSet<TSource> table,
            Expression<Func<TSource, TResult>> selector) where TSource : class
        {
            TResult lastId = table.Any() ? table.Max(selector) : default(TResult);
            if (lastId is int)
            {
                lastId = (TResult)(object)(((int)(object)lastId) + 1);
            }
            return lastId;
        }

    Here's my take on the extension method: it will work fine if the ObjectContext that I am working with has an unfiltered entity set. In that case, the ObjectContext will contain all rows from the data table, and I will get an accurate result. But if the entity set is the result of a query filter, the method will return the last entity key in the filtered entity set, which will not necessarily be the last key in the data table. So I think the extension method won't really work.

    At this point, the obvious solution seems to be to simply use a GUID as the entity key. That way, I only need to call the Guid.NewGuid() method to set the ID property before I add a new entity to my ObjectContext.

    Here is my question: Is there a simple way of getting the last primary key in the data store from EF4 (without having to create a second ObjectContext for that purpose)? Any other reason not to take the easy way out and simply use a GUID? Thanks for your help.
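
    As a point of comparison, a minimal sketch of the GUID route (the Order type and factory are invented for illustration): the key is generated entirely on the client, so no query against the store, filtered or not, is needed.

        using System;

        // Hypothetical entity standing in for a real EF4-mapped class.
        public class Order
        {
            public Guid Id { get; set; }
            public string Description { get; set; }
        }

        public static class OrderFactory
        {
            public static Order Create(string description)
            {
                return new Order
                {
                    // Client-generated key: no round trip to the store, and no race between
                    // two contexts both computing "max + 1" at the same moment.
                    Id = Guid.NewGuid(),
                    Description = description
                };
            }
        }

    The usual caution with Guid.NewGuid() keys is index fragmentation on a full SQL Server clustered index; whether that matters on SQL Compact for this workload is worth measuring rather than assuming.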

    Read the article

  • Is Amazon SQS the right choice here? Rails performance issue.

    - by ole_berlin
    I'm close to releasing a rails app with the common networking features (messaging, wall, etc.). I want to use some kind of background processing (most likely Bj) for off-loading tasks from the request/response cycle. This would happen when users invite friends via email to join and for email notifications. I'm not sure if I should just drop these invites and notifications in my Database, using a model and then just process it with a worker process every x minutes or if I should go for Amazon SQS, storing the messages and invites there and let my worker retrieve it from Amazon SQS for processing (sending the invites / notifications). The Amazon approach would get load off my Database but I guess it is slower to retrieve messages from there. What do you think?

    Read the article

  • Visual Studio 2010 Web Performance Tests / Load Tests / Coded UI Tests: does anyone really use these?

    - by punkouter
    I can find some articles on how to use them, but I can't seem to find anywhere people's impressions of using them in real projects. I have been trying to figure out how to use them and I've had a lot of problems. Can someone out there who uses these tools on the job give me their impression? Are there better alternative tools available? Is using these really just a waste of time? With Coded UI Tests I see how they are good for basic JavaScript checking, but the examples are so basic that I don't think it is worth it. With web tests I like how they work, but when I activate code coverage/ASP.NET profiling it doesn't work half the time.

    Read the article

  • Creating an Android app database with a big amount of data

    - by Thomas
    Hi all,

    The database of my application needs to be filled with a lot of data, so during onCreate() it's not only a few CREATE TABLE SQL instructions, there are also a lot of INSERTs. The solution I chose is to store all these instructions in an SQL file located in res/raw, which is loaded with Resources.openRawResource(id). It works well, but I face an encoding issue: I have some accented characters in the SQL file which appear wrong in my application. This is my code to do this:

        public String getFileContent(Resources resources, int rawId) throws IOException {
            InputStream is = resources.openRawResource(rawId);
            int size = is.available();
            // Read the entire asset into a local byte buffer.
            byte[] buffer = new byte[size];
            is.read(buffer);
            is.close();
            // Convert the buffer into a string.
            return new String(buffer);
        }

        public void onCreate(SQLiteDatabase db) {
            try {
                // get file content
                String sqlCode = getFileContent(mCtx.getResources(), R.raw.db_create);
                // execute code
                for (String sqlStatements : sqlCode.split(";")) {
                    db.execSQL(sqlStatements);
                }
                Log.v("Creating database done.");
            } catch (IOException e) {
                // Should never happen!
                Log.e("Error reading sql file " + e.getMessage(), e);
                throw new RuntimeException(e);
            } catch (SQLException e) {
                Log.e("Error executing sql code " + e.getMessage(), e);
                throw new RuntimeException(e);
            }
        }

    The solution I found to avoid this is to load the SQL instructions from a huge static final String instead of a file, and then all the accented characters appear correctly. But isn't there a more elegant way to load SQL instructions than a big static final String attribute holding all of them?

    Thanks in advance,
    Thomas

    Read the article

  • WCF interoperability with WSDL proxy and performance consideration advice.

    - by user194917
    I'm essentially writing a broker service. The requirement is that I write an API that acts as an intermediary broker between our in-house developed services and a 3rd-party API. The intention is that my API abstracts the actual communication with the 3rd-party API away from our internal systems. The architect on the project chose WCF as the communication framework. The problem is that 70 percent of our subscriber applications are written in .NET 2.0 and as such have no access to the class libraries required to implement a WCF proxy. The end result is that our proxy classes are loosely based on the code auto-generated by the WSDL tool, as opposed to the SvcUtil tool. My question is: although I have no issues implementing the required proxy classes using basicHttp as the actual binding and generating them with the WSDL tool, are there any special considerations that I need to take into account in this scenario, e.g. proxy optimizations and the like? Thanks in advance.
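
    For what it's worth, a minimal sketch of the service side of that arrangement (the service name, operation and address are invented): exposing the contract over basicHttpBinding keeps the wire format plain SOAP 1.1 over HTTP, which is what a .NET 2.0 proxy generated with wsdl.exe expects.

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IBrokerService
        {
            [OperationContract]
            string Forward(string payload);   // invented operation, for illustration only
        }

        public class BrokerService : IBrokerService
        {
            public string Forward(string payload)
            {
                // Hand the call off to the 3rd-party API here.
                return payload;
            }
        }

        class Program
        {
            static void Main()
            {
                using (var host = new ServiceHost(typeof(BrokerService), new Uri("http://localhost:8080/broker")))
                {
                    // basicHttpBinding = SOAP 1.1 over HTTP, consumable by wsdl.exe-generated .NET 2.0 proxies.
                    host.AddServiceEndpoint(typeof(IBrokerService), new BasicHttpBinding(), "");
                    host.Open();
                    Console.WriteLine("Broker listening. Press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }

    The main things a wsdl.exe client gives up relative to svcutil are the WS-* features (sessions, reliable messaging, message-level security), so security typically ends up handled at the transport (HTTPS) level instead.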

    Read the article

  • Using an embedded DB (SQLite / SQL Compact) for Message Passing within an app?

    - by wk1989
    Hello,

    Just out of curiosity: for applications that have a fairly complicated module tree, would something like SQLite / SQL Compact Edition work well for message passing? Say I have modules containing data such as \SubsystemA\SubSubSysB\ModuleB\ModuleDataC and \SubSystemB\SubSubSystemC\ModuleA\ModuleDataX. Using traditional message passing/routing, you have to go through intermediate modules in order to pass a message to ModuleB to request, say, ModuleDataC. Instead of doing that, if we simply store "\SubsystemA\SubSubSysB\ModuleB\ModuleDataC" in a SQLite database, getting that data is as simple as a SQL query and needs no routing and passing stuff around. Has anyone done this before? Even if you haven't, do you foresee any issues and performance impact? The only concern I have right now would be the passing of custom types, e.g. if ModuleDataC is a custom data structure or a pointer, I'll need some way of storing the data structure or the pointer in the DB.

    Thanks,
    JW

    EDIT: One usage case I haven't thought about is when you want to send a message from ModuleA to ModuleB to get ModuleB to do something, rather than just getting/setting data. Is it possible to do this using an embedded DB? I believe a callback from the DB would be needed; how feasible is this?

    Read the article

  • Custom ADO.NET provider to intercept and modify sql queries.

    - by Faisal
    Our client has an application that stores blobs in a database which has now grown enough to impact the performance of SQL Server. To overcome this issue, we are planning to offload all blobs to the file system and leave the path of each file in a new column in the user's table. For example, if the user has a table docs with columns id, name and content (blob), we would ask him to add a new column 'filepath' to this table. Our client is willing to make this change in the database. But when it comes to changing the SQL queries that read and write this table, they are not ready to accept that. Actually, they don't want any change that results in recompilation and deployment.

    Now we are planning to write a custom ADO.NET provider that will:

    - intercept the SELECT queries
    - add a column 'filepath' at the end of the SELECT statement
    - retrieve the result set and modify the 'content' column value based on the 'filepath' value

    Is there any use case that you think will certainly fail with this approach? I know this sounds dirty, but do we have a better way?
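
    To make the idea concrete, a rough sketch of the interception layer (not a full DbProviderFactory implementation; RewriteSelect here is a deliberately naive string hack, and the table/column names follow the docs example above):

        using System;
        using System.Data;
        using System.Data.Common;
        using System.IO;

        public class InterceptingCommand
        {
            private readonly DbCommand _inner;   // the real provider's command

            public InterceptingCommand(DbCommand inner)
            {
                _inner = inner;
            }

            public DataTable ExecuteSelect(string originalSql)
            {
                // e.g. "SELECT id, name, content FROM docs"
                //  ->  "SELECT id, name, content, filepath FROM docs"
                _inner.CommandText = RewriteSelect(originalSql);

                var table = new DataTable();
                using (var reader = _inner.ExecuteReader())
                {
                    table.Load(reader);
                }

                // Swap whatever is in the blob column for the file the path points at.
                foreach (DataRow row in table.Rows)
                {
                    row["content"] = File.ReadAllBytes((string)row["filepath"]);
                }
                return table;
            }

            // Naive illustration only: a real implementation needs proper SQL parsing.
            private static string RewriteSelect(string sql)
            {
                int fromIndex = sql.IndexOf(" FROM ", StringComparison.OrdinalIgnoreCase);
                return sql.Substring(0, fromIndex) + ", filepath" + sql.Substring(fromIndex);
            }
        }

    Whether this holds up depends heavily on how uniform the existing queries are; joins, column aliases, SELECT * and updates all need their own handling, which is where a wrapper like this tends to get expensive.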

    Read the article

  • Which parallel sorting algorithm has the best average case performance?

    - by Craig P. Motlin
    Sorting takes O(n log n) in the serial case. If we have O(n) processors we would hope for a linear speedup. O(log n) parallel algorithms exist but they have a very high constant. They also aren't applicable on commodity hardware which doesn't have anywhere near O(n) processors. With p processors, reasonable algorithms should take O(n/p log n/p) time. In the serial case, quick sort has the best runtime complexity on average. A parallel quick sort algorithm is easy to implement (see here and here). However it doesn't perform well since the very first step is to partition the whole collection on a single core. I have found information on many parallel sort algorithms but so far I have not seen anything pointing to a clear winner. I'm looking to sort lists of 1 million to 100 million elements in a JVM language running on 8 to 32 cores.
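
    Not a winner, but as a concrete baseline, here is a sketch of the simplest contender, a parallel merge sort (written in C# for consistency with the other sketches on this page; the same shape ports directly to a JVM fork/join pool). The cutoff value is an arbitrary guess.

        using System;
        using System.Threading.Tasks;

        static class ParallelMergeSort
        {
            public static void Sort(int[] a)
            {
                Sort(a, new int[a.Length], 0, a.Length, Environment.ProcessorCount);
            }

            private static void Sort(int[] a, int[] buf, int lo, int len, int workers)
            {
                // Below the cutoff, or once the workers are used up, fall back to the framework sort.
                if (workers <= 1 || len < 4096)
                {
                    Array.Sort(a, lo, len);
                    return;
                }

                int half = len / 2;
                Parallel.Invoke(
                    () => Sort(a, buf, lo, half, workers / 2),
                    () => Sort(a, buf, lo + half, len - half, workers - workers / 2));

                // Merge the two sorted halves through the scratch buffer.
                int i = lo, j = lo + half, k = lo, endL = lo + half, endR = lo + len;
                while (i < endL && j < endR) buf[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
                while (i < endL) buf[k++] = a[i++];
                while (j < endR) buf[k++] = a[j++];
                Array.Copy(buf, lo, a, lo, len);
            }
        }

    Note that the merge at each level is still sequential, which is the same kind of bottleneck the question points out for the parallel quicksort's first partition; the algorithms that attack that step (parallel merging, sample sort) are exactly where the literature stops having a clear winner.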

    Read the article

  • MS DOS function like ipconfig to get system performance specs?

    - by JustADude
    I am aware of MSINFO32, but I'm wondering if there is an MS DOS command, similar to ipconfig, to get system specifications. I would like the system specifications to be displayed at the MS DOS prompt. I would like to see at least:

    - CPU
    - RAM
    - BUS speed

    Thanks for any insights.

    Edit: I am unable to install any other software, so I have to use existing DOS commands to extract this information. Thank you again.

    2nd Edit: Whoops. Using Windows XP and Windows Vista.

    Read the article

  • Call from a singleton class to a function which in turn calls that class's method

    - by dare2be
    Hello, I am still looking for a way to phrase this properly (I'm not a native speaker...). I have this class SQL which implements the singleton pattern (for obvious reasons), and I also have this function, checkUsr(), which queries the database using one of SQL's methods. Everything works fine as long as I don't call checkUsr() from within the SQL class. When I do so, the script just exits and a blank page is displayed - no errors are returned, no exception is thrown... What's happening? And how do I work around this problem?

    EDIT:

        class SQL {
            public static function singleton() {
                static $instance;
                if (!isset($instance))
                    $instance = new SQL;
                return $instance;
            }

            public function tryLoginAuthor( $login, $sha1 ) {
                (...)
            }
        }

        function checkUsr() {
            if (!isset($_SESSION['login']) || !isset($_SESSION['sha1']))
                throw new Exception('Not logged in', 1);
            $SQL = SQL::singleton();
            $res = $SQL->tryLoginAuthor($_SESSION['login'], $_SESSION['sha1']);
            if (!isset($res[0]))
                throw new Exception('Not logged in', 1);
        }

    Read the article

  • APC decreasing php performance??? (php 5.3, apache 2.2, windows vista 64bit)

    - by M.M.
    Hi, I have Apache 2.2.15 (VC9) and PHP 5.3.2 (VC9, thread safe) running as an Apache module on a Vista 64-bit machine, all running fine. The project I'm benchmarking (with Apache's ab utility) is basically a standard Zend Framework project with no DB connection involved. The average (median) Apache response is about 0.15 seconds. After I installed APC (3.1.4-dev, VC9, thread safe) with standard settings, the request response time suddenly rose to 1.3 seconds (!), which is unacceptable... All APC settings always looked good (through the apc.php script: enough shm memory, cache not full, fragmentation 0%). The only difference was disabling the stat lookup (apc.stat = 0); then the response dropped to 0.09 seconds, which was finally better than without APC. IIRC, it's expected and obvious that the stat lookup creates some overhead, but shouldn't it still be far more performant than running without the APC extension at all? Or, put differently, why is apc.stat creating so much overhead? Apparently something is not working as it should, and I don't really know where to start looking. Thank you for your time/answers/direction in advance. Cheers, m.

    Read the article

  • Is it okay to violate the principle that collection properties should be readonly for performance?

    - by uriDium
    I used FxCop to analyze some code I had written. I had exposed a collection via a setter. I understand why this is not good: having the backing store change when I don't expect it is a very bad idea. Here is my problem, though. I retrieve a list of business objects from a Data Access Object. I then need to add that collection to another business class, and I was doing it with the setter method. The reason I did this was that it is going to be faster to make an assignment than to insert hundreds of thousands of objects one at a time into the collection again via another addElement method. Is it okay to have a setter for a collection in some scenarios? I thought of rather having a constructor which takes a collection. I thought maybe I could pass the object into the DAO and let the DAO populate it directly. Are there any other better ideas?
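
    A sketch of the middle ground FxCop is nudging toward (the type names are invented for illustration): keep the property read-only and expose the bulk operation explicitly, so the one-shot load stays cheap without handing out a setter for the backing store.

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public class Trade { }

        public class Portfolio
        {
            private readonly List<Trade> _trades;

            // Option 1: take the DAO's result in the constructor (one bulk copy).
            public Portfolio(IEnumerable<Trade> trades)
            {
                _trades = new List<Trade>(trades);
            }

            // Option 2: expose a bulk add instead of a property setter.
            public void AddTrades(IEnumerable<Trade> trades)
            {
                _trades.AddRange(trades);
            }

            // Callers can read and enumerate, but cannot swap out the backing list.
            public ReadOnlyCollection<Trade> Trades
            {
                get { return _trades.AsReadOnly(); }
            }
        }

    Both AddRange and the copying constructor do a single O(n) bulk copy rather than hundreds of thousands of individual Add calls; that is usually fast enough that the reference-assignment shortcut is not worth the aliasing it introduces.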

    Read the article
