Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • laptop crashed: why?

    - by sds
    My Linux (Ubuntu 12.04) laptop crashed, and I am trying to figure out why.

        # last
        sds      pts/4   :0                Tue Sep 4 10:01   still logged in
        sds      pts/3   :0                Tue Sep 4 10:00   still logged in
        reboot   system boot  3.2.0-29-generic  Tue Sep 4 09:43 - 11:23  (01:40)
        sds      pts/8   :0                Mon Sep 3 14:23 - crash  (19:19)

    This seems to indicate a crash at 09:42 (= 14:23 + 19:19). As per another question, I looked at /var/log.

    auth.log:

        Sep 4 09:17:02 t520sds CRON[32744]: pam_unix(cron:session): session closed for user root
        Sep 4 09:43:17 t520sds lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)

    There is no messages file.

    syslog:

        Sep 4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal
        Sep 4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started.

    kern.log:

        Sep 4 09:24:19 t520sds kernel: [219104.819969] CPU1: Package power limit normal
        Sep 4 09:24:19 t520sds kernel: [219104.819971] CPU2: Package power limit normal
        Sep 4 09:24:19 t520sds kernel: [219104.819974] CPU3: Package power limit normal
        Sep 4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal
        Sep 4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started.
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] Initializing cgroup subsys cpuset
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] Initializing cgroup subsys cpu

    I had a computation running until 09:24, but the system crashed 18 minutes later! kern.log has many pages of these:

        Sep 4 09:43:16 t520sds kernel: [ 0.000000] total RAM covered: 8086M
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 64K num_reg: 10 lose cover RAM: 38M
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 128K num_reg: 10 lose cover RAM: 38M
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 256K num_reg: 10 lose cover RAM: 38M
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 512K num_reg: 10 lose cover RAM: 38M
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 1M num_reg: 10 lose cover RAM: 38M
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 2M num_reg: 10 lose cover RAM: 38M
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 4M num_reg: 10 lose cover RAM: 38M
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 8M num_reg: 10 lose cover RAM: 38M
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 16M num_reg: 10 lose cover RAM: 38M
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 32M num_reg: 10 lose cover RAM: -16M
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 64M num_reg: 10 lose cover RAM: -16M
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 128M num_reg: 10 lose cover RAM: 0G
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 256M num_reg: 10 lose cover RAM: 0G
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 512M num_reg: 10 lose cover RAM: 0G
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 1G num_reg: 10 lose cover RAM: 0G
        Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 2G num_reg: 10 lose cover RAM: -1G

    Does this mean that my RAM is bad?!
    It also says:

        Sep 4 09:43:16 t520sds kernel: [ 2.944123] EXT4-fs (sda1): INFO: recovery required on readonly filesystem
        Sep 4 09:43:16 t520sds kernel: [ 2.944126] EXT4-fs (sda1): write access will be enabled during recovery
        Sep 4 09:43:16 t520sds kernel: [ 3.088001] firewire_core: created device fw0: GUID f0def1ff8fbd7dff, S400
        Sep 4 09:43:16 t520sds kernel: [ 8.929243] EXT4-fs (sda1): orphan cleanup on readonly fs
        Sep 4 09:43:16 t520sds kernel: [ 8.929249] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 658984
        ...
        Sep 4 09:43:16 t520sds kernel: [ 9.343266] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 525343
        Sep 4 09:43:16 t520sds kernel: [ 9.343270] EXT4-fs (sda1): 56 orphan inodes deleted
        Sep 4 09:43:16 t520sds kernel: [ 9.343271] EXT4-fs (sda1): recovery complete
        Sep 4 09:43:16 t520sds kernel: [ 9.645799] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)

    Does this mean my HD is bad? As per FaultyHardware, I tried smartctl -l selftest, which uncovered no errors:

        smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-30-generic] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

        === START OF INFORMATION SECTION ===
        Model Family:     Seagate Momentus 7200.4
        Device Model:     ST9500420AS
        Serial Number:    5VJE81YK
        LU WWN Device Id: 5 000c50 0440defe3
        Firmware Version: 0003LVM1
        User Capacity:    500,107,862,016 bytes [500 GB]
        Sector Size:      512 bytes logical/physical
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   8
        ATA Standard is:  ATA-8-ACS revision 4
        Local Time is:    Mon Sep 10 16:40:04 2012 EDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED
        See vendor-specific Attribute list for marginal Attributes.

        General SMART Values:
        Offline data collection status:  (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled.
        Self-test execution status:      (   0) The previous self-test routine completed without error or no self-test has ever been run.
        Total time to complete Offline data collection: ( 0) seconds.
        Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported.
        SMART capabilities:              (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
        Error logging capability:        (0x01) Error logging supported. General Purpose Logging supported.
        Short self-test routine recommended polling time:      (   1) minutes.
        Extended self-test routine recommended polling time:   ( 109) minutes.
        Conveyance self-test routine recommended polling time: (   2) minutes.
        SCT capabilities: (0x103b) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported.
        SMART Attributes Data Structure revision number: 10
        Vendor Specific SMART Attributes with Thresholds:
        ID#  ATTRIBUTE_NAME           FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED  RAW_VALUE
          1  Raw_Read_Error_Rate      0x000f  117   099   034    Pre-fail  Always   -            162843537
          3  Spin_Up_Time             0x0003  100   100   000    Pre-fail  Always   -            0
          4  Start_Stop_Count         0x0032  100   100   020    Old_age   Always   -            571
          5  Reallocated_Sector_Ct    0x0033  100   100   036    Pre-fail  Always   -            0
          7  Seek_Error_Rate          0x000f  069   060   030    Pre-fail  Always   -            17210154023
          9  Power_On_Hours           0x0032  095   095   000    Old_age   Always   -            174362787320258
         10  Spin_Retry_Count         0x0013  100   100   097    Pre-fail  Always   -            0
         12  Power_Cycle_Count        0x0032  100   100   020    Old_age   Always   -            571
        184  End-to-End_Error         0x0032  100   100   099    Old_age   Always   -            0
        187  Reported_Uncorrect       0x0032  100   100   000    Old_age   Always   -            0
        188  Command_Timeout          0x0032  100   100   000    Old_age   Always   -            1
        189  High_Fly_Writes          0x003a  100   100   000    Old_age   Always   -            0
        190  Airflow_Temperature_Cel  0x0022  061   043   045    Old_age   Always   In_the_past  39 (0 11 44 26)
        191  G-Sense_Error_Rate       0x0032  100   100   000    Old_age   Always   -            84
        192  Power-Off_Retract_Count  0x0032  100   100   000    Old_age   Always   -            20
        193  Load_Cycle_Count         0x0032  099   099   000    Old_age   Always   -            2434
        194  Temperature_Celsius      0x0022  039   057   000    Old_age   Always   -            39 (0 15 0 0)
        195  Hardware_ECC_Recovered   0x001a  041   041   000    Old_age   Always   -            162843537
        196  Reallocated_Event_Count  0x000f  095   095   030    Pre-fail  Always   -            4540 (61955, 0)
        197  Current_Pending_Sector   0x0012  100   100   000    Old_age   Always   -            0
        198  Offline_Uncorrectable    0x0010  100   100   000    Old_age   Offline  -            0
        199  UDMA_CRC_Error_Count     0x003e  200   200   000    Old_age   Always   -            0
        254  Free_Fall_Sensor         0x0032  100   100   000    Old_age   Always   -            0

        SMART Error Log Version: 1
        No Errors Logged

        SMART Self-test log structure revision number 1
        Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
        # 1  Extended offline  Completed without error  00%        4545             -

        SMART Selective self-test log data structure revision number 1
         SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
            1        0        0  Not_testing
            2        0        0  Not_testing
            3        0        0  Not_testing
            4        0        0  Not_testing
            5        0        0  Not_testing
        Selective self-test flags (0x0):
          After scanning selected spans, do NOT read-scan remainder of disk.
        If Selective self-test is pending on power-up, resume after 0 minute delay.

    Googling for the messages proved inconclusive; I can't even figure out whether the messages are routine or catastrophic. So, what do I do now?
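    One small step in the reasoning above worth double-checking is the crash-time arithmetic derived from the last output. A quick sketch in Python (dates taken from the log excerpts above) confirms it:

        from datetime import datetime, timedelta

        # session opened Mon Sep 3 14:23 and ran 19:19 until "crash" (per `last`)
        login = datetime(2012, 9, 3, 14, 23)
        duration = timedelta(hours=19, minutes=19)

        print(login + duration)  # 2012-09-04 09:42:00, just before the 09:43 reboot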


  • PHP, SimpleXML, while loop

    - by Michael
    I'm trying to get some information from the eBay API and store it in a database. I used SimpleXML to extract the information, but I have a small issue: the information is not displayed for some items, even though if I print the SimpleXML object I can see very well that the information is provided by the eBay API. I have:

        $items = "220617293997,250645537939,230485306218,110537213815,180519294810";
        $number_of_items = count(explode(",", $items));
        $xml = $baseClass->getContent("http://open.api.ebay.com/shopping?callname=GetMultipleItems&responseencoding=XML&appid=Morcovar-c74b-47c0-954f-463afb69a4b3&siteid=0&version=525&IncludeSelector=ItemSpecifics&ItemID=$items");
        writeDoc($xml, "api.xml");
        //echo $xml;
        $getvalues = simplexml_load_file('api.xml');
        // print_r($getvalue);
        $number = "0";
        while($number < 6) {
            $item_number = $getvalues->Item[$number]->ItemID;
            $location = $getvalues->Item[$number]->Location;
            $title = $getvalues->Item[$number]->Title;
            $price = $getvalues->Item[$number]->ConvertedCurrentPrice;
            $manufacturer = $getvalues->Item[$number]->ItemSpecifics->NameValueList[3]->Value;
            $model = $getvalues->Item[$number]->ItemSpecifics->NameValueList[4]->Value;
            $mileage = $getvalues->Item[$number]->ItemSpecifics->NameValueList[5]->Value;
            echo "item number = $item_number <br>localtion = $location<br>".
                 "title = $title<br>price = $price<br>manufacturer = $manufacturer".
                 "<br>model = $model<br>mileage = $mileage<br>";
            $number++;
        }

    The above code returns:

        item number =
        localtion =
        title =
        price =
        manufacturer =
        model =
        mileage =

        item number = 230485306218
        localtion = Coventry, Warwickshire
        title = 2001 LAND ROVER RANGE ROVER VOGUE AUTO GREEN
        price = 3635.07
        manufacturer = Land Rover
        model = Range Rover
        mileage = 76000

        item number = 220617293997
        localtion = Crawley, West Sussex
        title = 2004 CITROEN C5 HDI LX RED
        price = 3115.77
        manufacturer = Citroen
        model = C5
        mileage = 76000

        item number = 180519294810
        localtion = London, London
        title = 2000 VOLKSWAGEN POLO 1.4 SILVER 16V NEED GEAR BOX
        price = 905.06
        manufacturer = Right-hand drive
        model =
        mileage = Standard Car

        item number =
        localtion =
        title =
        price =
        manufacturer =
        model =
        mileage =

    As you can see, the information is not retrieved for a few items. If I replace the $number manually, e.g. $item_number = $getvalues->Item[4]->ItemID;, it works well for any number.
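    For illustration only (Python rather than PHP, and a hypothetical trimmed-down response): one way to sidestep positional indexing entirely is to walk whatever Item elements actually come back and key them by their ItemID, so gaps or reordering in the response cannot misalign fields:

        import xml.etree.ElementTree as ET

        # hypothetical, heavily trimmed GetMultipleItems-style response
        response = """
        <Items>
          <Item><ItemID>230485306218</ItemID><Location>Coventry</Location></Item>
          <Item><ItemID>220617293997</ItemID><Location>Crawley</Location></Item>
        </Items>
        """

        by_id = {}
        for item in ET.fromstring(response).iter("Item"):
            by_id[item.findtext("ItemID")] = {"location": item.findtext("Location")}

        requested = ["220617293997", "250645537939", "230485306218"]
        for item_id in requested:
            print(item_id, by_id.get(item_id, "missing from response"))

    The same foreach-and-index-by-ID shape works with SimpleXML as well; the point is to never assume the API returns items in the order they were requested.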


  • ASP.net Repeater Control Problem (nothing outputted)

    - by Phil
    I have the following db code in my usercontrol (content.ascx.vb):

        If did = 0 Then
            s = "select etc (statement works on server)"
            x = New SqlCommand(s, c)
            x.Parameters.Add("@contentid", Data.SqlDbType.Int)
            x.Parameters("@contentid").Value = contentid
            c.Open()
            r = x.ExecuteReader
            If r.HasRows Then
                Contactinforepeater.DataSource = r
            End If
            c.Close()
            r.Close()
        Else
            s = "select etc (statement works on server)"
            x = New SqlCommand(s, c)
            x.Parameters.Add("@contentid", SqlDbType.Int)
            x.Parameters("@contentid").Value = contentid
            x.Parameters.Add("@did", SqlDbType.Int)
            x.Parameters("@did").Value = did
            c.Open()
            r = x.ExecuteReader
            If r.HasRows Then
                Contactinforepeater.DataSource = r
                c.Close()
                r.Close()
            End If
        End If

    Then I have the following repeater control markup in my usercontrol (content.ascx):

        <asp:Repeater ID="Contactinforepeater" runat="server">
            <HeaderTemplate>
                <h1>Contact Information</h1>
            </HeaderTemplate>
            <ItemTemplate>
                <table width="50%">
                    <tr><td colspan="2"><%#Container.DataItem("position")%></td></tr>
                    <tr><td>Name:</td><td><%#Container.DataItem("surname")%></td></tr>
                    <tr><td>Telephone:</td><td><%#Container.DataItem("telephone")%></td></tr>
                    <tr><td>Fax:</td><td><%#Container.DataItem("fax")%></td></tr>
                    <tr><td>Email:</td><td><%#Container.DataItem("email")%></td></tr>
                </table>
            </ItemTemplate>
            <SeparatorTemplate><br /><hr /><br /></SeparatorTemplate>
        </asp:Repeater>

    When I insert this usercontrol into default.aspx with:

        <%@ Register src="Modules/Content.ascx" tagname="Content" tagprefix="uc1" %>

    and

        <form id="form1" runat="server">
            <div>
                <uc1:Content ID="Content" runat="server" />
            </div>
        </form>

    I do not get any error messages, but the expected content from the database is not displayed. Can someone please show me the syntax to get this working or point out where I am going wrong? Thanks in advance!


  • Arguments for moving from LINQ to SQL to NHibernate?

    - by sah302
    Backstory: Hi all, I just spent a lot of time reading many of the LINQ to SQL vs. NHibernate threads here and on other sites. I work in a small development team of 4 people, and we don't really have any super-experienced developers. We work for a small company that has a lot of technical needs but not enough developers to implement them (and hiring more is out of the question right now). Typically our projects (which individually are fairly small) have been coded separately and weren't really layered in any way: code wasn't reused, there were no class libraries, and we just use the LINQ to SQL .dbml files for our projects. We really don't even use objects but pass around values and such; the only time we use objects is when inserting into a database (not even when querying, since you don't need to assign the result to a type and can just bind to a GridView). Despite all this, as I said, our company has a lot of technical needs; no one could come to us for a year and we would still have plenty of work implementing requested features.

    Well, I have decided to change that a bit, first by creating class libraries and actually adding layers to our applications. I am trying to meet these guys halfway by still using LINQ to SQL as the ORM and still using VB as the language. However, I am finding it a b***h of a time dealing with so many things in LINQ to SQL that I found easy in NHibernate (automatic handling of the session, criteria creation being easier than expression trees, generic and dynamic querying being easier, etc.).

    So... Question: How can I convince my lead developers and other senior programmers that switching to NHibernate is a good thing? That being in control of our domain objects is a good thing? That being able to implement interfaces is good? I've tried explaining the advantages of this before, but they don't take it in because they've never programmed in a truly OO and layered way. Also, one of the counterarguments I can foresee is that sqlMetal generates those classes automatically and therefore saves a lot of time. I can't really counter that other than saying that spending more time on infrastructure to make it more scalable and flexible is good, but they can't see how. Again, I know the features and advantages of each (well enough, I believe), but I need arguments applicable to my context, hence why I provided the context. I guess I'm just not a very good arguer.

    (Caveat, for all the LINQ to SQL lovers: I may just not be super-proficient at LINQ, but I find it very cumbersome that you are required to download an extra library for dynamic queries, which doesn't by default support GUID comparisons, and I also find the way of updating entities to be cumbersome in terms of data context management, so it could just be that I suck, hehe.)


  • How to make multiple queries with PHP prepared statements (Commands out of sync error)

    - by Tirithen
    I'm trying to run three MySQL queries from a PHP script. The first one works fine, but on the second one I get the "Commands out of sync; you can't run this command now" error. I have managed to understand that I need to "empty" the result set before preparing a new query, but I can't seem to understand how. I thought that $statement->close; would do that for me. Here is the relevant part of the code:

        <?php
        $statement = $db_conn->prepare("CALL getSketches(?,?)"); // Prepare SQL routine to check if user is accepted
        $statement->bind_param("is", $user_id, $loaded_sketches); // Bind variables to send
        $statement->execute(); // Execute the query
        $statement->bind_result( // Set return variables
            $id, $name, $description, $visibility,
            $createdby_id, $createdby_name, $createdon, $permission
        );
        $new_sketches_id = array();
        while($statement->fetch()) {
            $result['newSketches'][$id] = array(
                "name" => $name,
                "description" => $description,
                "visibility" => $visibility,
                "createdById" => $createdby_id,
                "createdByName" => $createdby_name,
                "createdOn" => $createdon,
                "permission" => $permission
            );
            $new_sketches_id[] = $id;
        }
        $statement->close; // Close statement

        $new_sketches_ids = implode(",", $new_sketches_id);

        // Get the new sketches' elements
        $statement = $db_conn->prepare("CALL getElements(?,'',?,'00000000000000')"); // Prepare SQL routine to check if user is accepted
        // The script crashes here with $db_conn->error:
        // "Commands out of sync; you can't run this command now"
        $statement->bind_param("si", $new_sketches_ids, $user_id); // Bind variables to send
        $statement->execute(); // Execute the query
        $statement->bind_result( // Set return variables
            $id, $user_id, $type, $attribute_d, $attribute_stroke,
            $attribute_strokeWidth, $sketch_id, $createdon
        );
        while($statement->fetch()) {
            $result['newSketches'][$sketch_id]['newElements']["u".$user_id."e".$id] = array(
                "type" => $type,
                "d" => $attribute_d,
                "stroke" => $attribute_stroke,
                "strokeWidth" => $attribute_strokeWidth,
            );
        }
        $statement->close; // Close statement
        ?>

    How can I make the second query without closing and reopening the entire database connection?
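    The underlying rule here is at the protocol level, not specific to PHP: a stored-procedure CALL produces an extra status result set, and the server refuses new commands until every pending result set has been consumed. As a rough illustration of the "drain everything first" pattern, here is the same flow sketched in Python, assuming a PEP 249 driver such as MySQLdb that implements the optional nextset() extension:

        import MySQLdb  # assumed driver; any DB-API driver exposing nextset() behaves alike

        def fetch_sketches_then_elements(conn, user_id, loaded_ids):
            cur = conn.cursor()
            cur.execute("CALL getSketches(%s, %s)", (user_id, loaded_ids))
            sketches = cur.fetchall()
            # a CALL yields a trailing status result set; drain every pending set
            while cur.nextset() is not None:
                cur.fetchall()
            # only now is the connection free to accept the next statement
            cur.execute("CALL getElements(%s, '', %s, '00000000000000')", (loaded_ids, user_id))
            return sketches, cur.fetchall()

    In mysqli terms, the commonly cited equivalent is to call $statement->close() as a method (with parentheses) and then loop $db_conn->next_result() until no result sets remain before the next prepare().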


  • .Net Dynamically Load DLL

    - by hermiod
    I am trying to write some code that will allow me to dynamically load DLLs into my application, depending on an application setting. The idea is that the database to be accessed is set in the application settings, which then loads the appropriate DLL and assigns it to an instance of an interface for my application to access. This is my code at the moment:

        Dim SQLDataSource As ICRDataLayer
        Dim ass As Assembly = Assembly. _
            LoadFrom("M:\MyProgs\WebService\DynamicAssemblyLoading\SQLServer\bin\Debug\SQLServer.dll")
        Dim obj As Object = ass.CreateInstance(GetType(ICRDataLayer).ToString, True)
        SQLDataSource = DirectCast(obj, ICRDataLayer)
        MsgBox(SQLDataSource.ModuleName & vbNewLine & SQLDataSource.ModuleDescription)

    I have my interface (ICRDataLayer), and SQLServer.dll contains an implementation of this interface. I just want to load the assembly and assign it to the SQLDataSource object. The above code just doesn't work. There are no exceptions thrown; even the MsgBox doesn't appear. I would've expected at least the message box appearing with nothing in it, but even this doesn't happen!

    Is there a way to determine if the loaded assembly implements a specific interface? I tried the below, but this also doesn't seem to do anything!

        For Each loadedType As Type In ass.GetTypes
            If GetType(ICRDataLayer).IsAssignableFrom(loadedType) Then
                Dim obj1 As Object = ass.CreateInstance(GetType(ICRDataLayer).ToString, True)
                SQLDataSource = DirectCast(obj1, ICRDataLayer)
            End If
        Next

    EDIT: New code from Vlad's examples:

        Module CRDataLayerFactory
            Sub New()
            End Sub
            ' class name is a contract,
            ' should be the same for all plugins
            Private Function Create() As ICRDataLayer
                Return New SQLServer()
            End Function
        End Module

    Above is the Module in each DLL, converted from Vlad's C# example. Below is my code to bring in the DLL:

        Dim SQLDataSource As ICRDataLayer
        Dim ass As Assembly = Assembly. _
            LoadFrom("M:\MyProgs\WebService\DynamicAssemblyLoading\SQLServer\bin\Debug\SQLServer.dll")
        Dim factory As Object = ass.CreateInstance("CRDataLayerFactory", True)
        Dim t As Type = factory.GetType
        Dim method As MethodInfo = t.GetMethod("Create")
        Dim obj As Object = method.Invoke(factory, Nothing)
        SQLDataSource = DirectCast(obj, ICRDataLayer)

    EDIT: Implementation based on Paul Kohler's code:

        Dim file As String
        For Each file In Directory.GetFiles(baseDir, searchPattern, SearchOption.TopDirectoryOnly)
            Dim assemblyType As System.Type
            For Each assemblyType In Assembly.LoadFrom(file).GetTypes
                Dim s As System.Type() = assemblyType.GetInterfaces
                For Each ty As System.Type In s
                    If ty.Name.Contains("ICRDataLayer") Then
                        MsgBox(ty.Name)
                        plugin = DirectCast(Activator.CreateInstance(assemblyType), ICRDataLayer)
                        MessageBox.Show(plugin.ModuleName)
                    End If
                Next
            Next
        Next

    I get the following error with this code:

        Unable to cast object of type 'SQLServer.CRDataSource.SQLServer' to type 'DynamicAssemblyLoading.ICRDataLayer'.

    The actual DLL is in a different project called SQLServer, in the same solution as my implementation code. CRDataSource is a namespace and SQLServer is the actual class name in the DLL. The SQLServer class implements ICRDataLayer, so I don't understand why it wouldn't be able to cast it. Is the naming significant here? I wouldn't have thought it would be.
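    For comparison, this scan-types-for-an-interface approach is the standard plugin-loading pattern in most languages. A minimal sketch of the same shape in Python (illustrative only; the module path and class names are made up), with issubclass standing in for IsAssignableFrom:

        import importlib.util
        import inspect

        class ICRDataLayer:
            """Stand-in for the interface/contract that plugins must implement."""
            def module_name(self):
                raise NotImplementedError

        def load_plugin(path):
            spec = importlib.util.spec_from_file_location("plugin", path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            for _, cls in inspect.getmembers(module, inspect.isclass):
                # same idea as GetType(ICRDataLayer).IsAssignableFrom(loadedType)
                if issubclass(cls, ICRDataLayer) and cls is not ICRDataLayer:
                    return cls()  # instantiate the concrete implementation
            raise LookupError("no ICRDataLayer implementation found in " + path)

    The key design point in either language is that the host and the plugin must share one definition of the interface type; if each side carries its own copy, casts between them fail even when the names match.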


  • :contains for multiple words

    - by Emin
    I am using the following jQuery:

        var etag = 'kate';
        if (etag.length > 0) {
            $('div').each(function () {
                $(this).find('ul:not(:contains(' + etag + '))').hide();
                $(this).find('ul:contains(' + etag + ')').show();
            });
        }

    against the following HTML:

        <div id="2">
            <ul><li>john</li><li>jack</li></ul>
            <ul><li>kate</li><li>clair</li></ul>
            <ul><li>hugo</li><li>desmond</li></ul>
            <ul><li>said</li><li>jacob</li></ul>
        </div>
        <div id="3">
            <ul><li>jacob</li><li>me</li></ul>
            <ul><li>desmond</li><li>george</li></ul>
            <ul><li>allen</li><li>kate</li></ul>
            <ul><li>salkldf</li><li>3kl44</li></ul>
        </div>

    Basically, as long as etag is a single word, the code works perfectly and hides those elements that do not contain etag. My problem is when etag is multiple words (and I don't have control over it; it's coming from a database and could be a combination of multiple words separated with space characters). In that case the code does not work. Is there any way to achieve this?
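    The usual approach is to split the tag on whitespace and test each word individually. The matching rule itself is tiny; sketched here in Python just to make it explicit (a jQuery version would apply the same test inside a .filter() callback rather than a single :contains selector):

        def matches(text, etag, require_all=True):
            """True if text contains all (or any) of the space-separated words in etag."""
            words = etag.split()
            if require_all:
                return all(word in text for word in words)
            return any(word in text for word in words)

        print(matches("kate clair", "kate"))                          # True
        print(matches("kate clair", "kate clair"))                    # True
        print(matches("john jack", "kate clair", require_all=False))  # False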


  • Optimizing tasks to reduce CPU in a trading application

    - by Joel
    Hello, I have designed a trading application that handles customers' stock investment portfolios. I am using two datastore kinds:

    Stocks - contains a unique stock name and its daily percent change.
    UserTransactions - contains information regarding a specific purchase of a stock made by a user: the value of the purchase along with a reference to the Stock purchased.

    The db.Model Python modules:

        class Stocks(db.Model):
            stockname = db.StringProperty(multiline=True)
            dailyPercentChange = db.FloatProperty(default=1.0)

        class UserTransactions(db.Model):
            buyer = db.UserProperty()
            value = db.FloatProperty()
            stockref = db.ReferenceProperty(Stocks)

    Once an hour I need to update the database: update the daily percent change in Stocks, and then update the value of all entities in UserTransactions that refer to that stock. The following Python module iterates over all the stocks, updates the dailyPercentChange property, and invokes a task to go over all UserTransactions entities which refer to the stock and update their value.

    Stocks.py:

        # Iterate over all stocks in the datastore
        for stock in Stocks.all():
            # update daily percent change in the datastore
            db.run_in_transaction(updateStockTxn, stock.key())
            # create a task to update all user transaction entities referring to this stock
            taskqueue.add(url='/task', params={'stock_key': str(stock.key()),
                                               'value': self.request.get('some_val_for_stock')})

        def updateStockTxn(stock_key):
            # fetch the stock again - necessary to avoid concurrent updates
            stock = db.get(stock_key)
            stock.dailyPercentChange = data.get('some_val_for_stock')  # I get this value from outside
            # ... some more calculations here ...
            stock.put()

    Task.py (/task):

        # Amount of transactions per task
        amountPerCall = 10
        stock = db.get(self.request.get("stock_key"))
        # Get all user transactions which point to the current stock
        user_transaction_query = stock.usertransactions_set
        cursor = self.request.get("cursor")
        if cursor:
            user_transaction_query.with_cursor(cursor)
        # Spawn another task if more than 10 transactions are in the datastore
        transactions = user_transaction_query.fetch(amountPerCall)
        if len(transactions) == amountPerCall:
            taskqueue.add(url='/task', params={'stock_key': str(stock.key()),
                                               'value': self.request.get('some_val_for_stock'),
                                               'cursor': user_transaction_query.cursor()})
        # Iterate over all transactions pointing to the stock and update their value
        for transaction in transactions:
            db.run_in_transaction(updateUserTransactionTxn, transaction.key())

        def updateUserTransactionTxn(transaction_key):
            # fetch the transaction again - necessary to avoid concurrent updates
            transaction = db.get(transaction_key)
            transaction.value = transaction.value * self.request.get('some_val_for_stock')
            db.put(transaction)

    The problem: currently the system works great, but it is not scaling well. I have around 100 Stocks with 300 UserTransactions, and I run the update every hour. In the dashboard, I see that Task.py takes around 65% of the CPU (Stocks.py takes around 20-30%), and I am using almost all of the 6.5 free CPU hours given to me by App Engine. I have no problem enabling billing and paying for additional CPU, but the problem is the scaling of the system. Using 6.5 CPU hours for 100 stocks is very poor.

    I was wondering, given the requirements of the system as mentioned above, whether there is a better and more efficient implementation (or just a small change that can help the current implementation) than the one presented here. Thanks!! Joel
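    One restructuring that is often suggested for this kind of fan-out (a sketch under assumptions, not code from the post): stop rewriting every UserTransactions entity each hour, and instead accumulate a running multiplier on the stock, deriving a transaction's current value only when it is read. The hourly work then becomes one write per stock, independent of the number of transactions:

        from google.appengine.ext import db

        class Stocks(db.Model):
            stockname = db.StringProperty(multiline=True)
            dailyPercentChange = db.FloatProperty(default=1.0)
            cumulativeFactor = db.FloatProperty(default=1.0)  # product of all hourly changes

        class UserTransactions(db.Model):
            buyer = db.UserProperty()
            baseValue = db.FloatProperty()    # value at purchase time
            baseFactor = db.FloatProperty()   # stock's cumulativeFactor at purchase time
            stockref = db.ReferenceProperty(Stocks)

            def current_value(self):
                # derived on read: no hourly fan-out writes needed at all
                return self.baseValue * (self.stockref.cumulativeFactor / self.baseFactor)

        def hourly_update(stock_key, change):
            def txn():
                stock = db.get(stock_key)
                stock.dailyPercentChange = change
                stock.cumulativeFactor *= change
                stock.put()
            db.run_in_transaction(txn)

    The trade-off is a datastore dereference per read, which can be batched or cached; whether that beats 300 hourly writes depends on the read/write ratio.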


  • How to use WordPress' http.php in external projects?

    - by NJTechGuy
    I am trying to parse data from a pipe-delimited text file hosted on another server, which in turn will be inserted into a database. My host (1and1) disabled allow_url_fopen in php.ini, I guess. Error message:

        Warning: fopen() [function.fopen]: URL file-access is disabled in the server configuration in

    Code:

        <?
        // make sure curl is installed
        if (function_exists('curl_init')) {
            // initialize a new curl resource
            $ch = curl_init();
            // set the url to fetch
            curl_setopt($ch, CURLOPT_URL, 'http://abc.com/data/output.txt');
            // don't give me the headers, just the content
            curl_setopt($ch, CURLOPT_HEADER, 0);
            // return the value instead of printing the response to the browser
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            // use a user agent to mimic a browser
            curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20041107 Firefox/1.0');
            $content = curl_exec($ch);
            // remember to always close the session and free all resources
            curl_close($ch);
        } else {
            // curl library is not installed so we better use something else
        }
        //$contents = fread ($fd, filesize ($filename));
        //fclose ($fd);
        $delimiter = "|";
        $splitcontents = explode($delimiter, $contents);
        $counter = "";
        ?>
        <font color="blue" face="arial" size="4">Complete File Contents</font>
        <hr>
        <? echo $contents; ?>
        <br><br>
        <font color="blue" face="arial" size="4">Split File Contents</font>
        <hr>
        <?
        foreach ( $splitcontents as $color ) {
            $counter = $counter + 1;
            echo "<b>Split $counter: </b> $colorn<br>";
        }
        ?>

    WordPress has this cool http.php file. Is there a better way of doing it? If not, how do I use http.php for this task? Thank you guys.
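    For comparison only, the same fetch-and-split flow sketched in Python (using the placeholder URL from the post; this is not the WordPress API):

        from urllib.request import urlopen

        url = "http://abc.com/data/output.txt"  # placeholder host from the post

        with urlopen(url, timeout=10) as resp:
            contents = resp.read().decode("utf-8", errors="replace")

        for i, field in enumerate(contents.split("|"), start=1):
            print("Split %d: %s" % (i, field))

    Whatever the language, the shape is the same: one HTTP GET, then one explode/split on the delimiter. In WordPress, the HTTP GET step is what the HTTP API defined in http.php wraps.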


  • Webcast Q&A: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the second webcast in our WebCenter in Action webcast series, "Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter," where customer Michael Chander from Qualcomm and Vince Casarez & Gourav Goyal from Oracle partner Keste shared how Oracle WebCenter is powering Qualcomm's externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.

    Mike Chandler, Qualcomm

    Q: Did you run into any issues when integrating all of the different applications together?
    A: Definitely. Our main challenges were in the area of user provisioning and security propagation, all the standard stuff you might expect when hooking up SSO for authentication and authorization. In addition, we spent several iterations getting the UIs in sync. While everyone was given the same digital material to build to, each team interpreted and implemented it their own way. Initially, as a user navigated, if you were looking for it, you could see slight variations in color or font or width, stuff like that. So we had to pull all the developers responsible for the UI together and get pixel-level agreement on a lot of things so we could ensure seamless transitions across applications.

    Q: What has been the biggest benefit your end users have seen?
    A: Wow, there have been several. An SSO-enabled environment was a huge win for our users. The portal application that this replaced had not really been invested in by the business. With this project, we had full business participation and backing, and it really showed in some key areas like the shopping experience. For example, while ordering in the previous site, the items did not have any pictures or really usable descriptions. A tremendous amount of work was done to try and make the site more intuitive and user friendly. Site performance has also drastically improved thanks to new hardware, improved database design, and of course the fact that ADF has made great strides in runtime performance.

    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome it?
    A: Within a large company, I'm sure there is always going to be competition for large projects, as there was here. Once we got through the technical analysis and settled on the technology choices, there was actually no resistance to implementing the solution. This project was fully driven by the business with the aim of long-term growth. I can confidently say that the fact that this project was given the utmost importance by both the business and IT really helped put down any resistance that you would typically see while implementing a new solution.

    Q: Given the performance, what do you estimate to be the top-end capacity of the system?
    A: I think our top-end capacity is really only limited by our hardware. I'm comfortable saying we could grow 10x on our current hardware, both in terms of transactions and users. We can easily spin up new JVM instances if needed; we already use fewer JVMs than we had planned. In addition, ADF is doing a very good job with its connection pooling and application module pooling, so we see a very good ratio of users connected to the systems vs. db connections, without impacting performance.

    Q: What's the overview or summary of feedback from the users interacting with the site?
    A: Feedback has been overwhelmingly positive from both the business and our customers. They're very happy with the new SSO environment, the new LAF, and the performance of the site. Of course, it's not all roses. No matter what, there are always going to be people that don't like the layout or the color scheme, etc. By and large, though, customers are happy and the business is happy.

    Q: Can you describe the impressions about the site before and after the project within Qualcomm?
    A: Before the project, the site worked and people were using it, but most people were not happy with it. It was slow and tended to be a bit temperamental; for example, a user would perform a transaction and the system would throw an unexpected error. The user could back up, retry the steps, and things would work fine, so why didn't it work the first time? From a UI perspective, we'd hear comments like it looked like it was built by a high school student.

    Vince Casarez & Gourav Goyal, Keste

    Q: Did you run into any obstacles when implementing the solution?
    A: It's interesting; some people call them "obstacles", but on this project we just called them "dependencies". There were both technical and business-related dependencies that we had to work out. Mike points out the SSO dependencies and the coordination and synchronization between the teams to have a seamless login experience and a seamless end-user experience. There was also a set of dependencies on the user acceptance testing to make sure that everyone understood the use cases for how the system would be used. With a branching into a new market and trying to match the simple user experience many consumer sites have today, there was always a tendency for team members to offer suggestions on how things could be simpler. But with all the work up front on the user design, and with the business driving this set of experiences, the downstream suggestions that tend to distract a team were minimized. In this case, all the work up front allowed us to enumerate the "dependencies" and keep the distractions to a minimum.

    Q: Was there a lot of custom work that needed to be done for this particular solution?
    A: The focus for this particular solution was really on the custom processes. The interesting thing is that with the data flows and the integration with applications, there are some pre-built integrations, but realistically, for the process flow, we had to build those. The framework and tooling we used made things easier, so we didn't have to implement core functionality like transitioning from screen to screen or from flow to flow. The design feature of Task Flows really helped speed the development and keep the component infrastructure in line with the dynamic processes. Task Flows and other elements like Skins are core to the infrastructure or technology stack of Oracle. This allowed the team to center the project focus around the business flows and use cases, to meet the core requirements and keep the project on time.

    Q: What do you think were the keys to success for rolling out WebCenter?
    A: The five main keys to success were:
    1) Sponsorship from the whole organization around this project, from senior executive agreement, to business owners driving functionality, to IT development alignment;
    2) Upfront design planning and use case definition to clearly define the project scope and requirements;
    3) Focused development and project management aligned with the top-level goals and drivers;
    4) User acceptance and usability testing along the way to identify potential issues and direct resolution of the issues; and
    5) Constant prioritization by the business of the issues for development to fix.

    It also helps to have great team chemistry and really smart people working on the project.

    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!


  • Parallel.For: maintaining input list order in the output list

    - by romeozor
    I'd like some input on keeping the order of a list during heavy-duty operations that I decided to try to do in a parallel manner to see if it boosts performance. (It did!) I came up with a solution, but since this was my first attempt at anything parallel, I'd need someone to slap my hands if I did something very stupid.

    There's a query that returns a list of card owners, sorted by name, then by date of birth. This needs to be rendered in a table on a web page (ASP.NET WebForms). The original coder decided he would construct the table cell by cell (TableCell), add the cells to rows (TableRow), then add each row to the table. So no GridView; allegedly its performance is bad, but the performance was very poor regardless :). The database query returns in no time; most of the time is spent looping through the results and adding table cells, etc.

    I made the following method to maintain the original order of the list:

        private TableRow[] ComposeRows(List<CardHolder> queryResult)
        {
            int queryElementsCount = queryResult.Count();

            // array with the query's size
            var rowArray = new TableRow[queryElementsCount];

            Parallel.For(0, queryElementsCount, i =>
            {
                var row = new TableRow();
                var cell = new TableCell();

                // various operations, including simple ones such as:
                cell.Text = queryResult[i].Name;
                row.Cells.Add(cell);

                // here I'm adding the current item at its original index
                // to maintain order in the output list
                rowArray[i] = row;
            });

            return rowArray;
        }

    So as you can see, because I'm returning a very different type of data (List<CardHolder> -> TableRow[]), I can't just omit the ordering from the original query and do it after the operations.

    Also, I thought it would be a good idea to Dispose() the objects at the end of each loop, because the query can return a huge list, and letting cell and row objects pile up in the heap could impact performance.(?)

    How badly did I do? Does anyone have a better solution in case mine is flawed?
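    The index-slot idea generalizes beyond C#: because each iteration writes only to its own position in a pre-sized array, the output order matches the input order no matter which iterations finish first. The same shape in Python, purely for illustration (not the poster's code):

        from concurrent.futures import ThreadPoolExecutor

        def render_row(card_holder):
            # stand-in for the expensive cell/row construction
            return "<tr><td>%s</td></tr>" % card_holder

        def compose_rows(query_result):
            rows = [None] * len(query_result)   # pre-sized output array

            def work(i):
                rows[i] = render_row(query_result[i])  # slot i preserves input order

            with ThreadPoolExecutor() as pool:
                list(pool.map(work, range(len(query_result))))
            return rows

        print(compose_rows(["alice", "bob", "carol"]))

    (In Python specifically, pool.map(render_row, query_result) already yields results in input order, so the explicit slot array is only needed when, as in the C# code, you index into shared output storage yourself.)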


  • Problem with custom NSURLProtocol and caching on iPhone

    - by TomSwift
    My iPhone app embeds a UIWebView which loads HTML via a custom NSURLProtocol handler I have registered. My problem is that resources referenced in the returned HTML, which are also loaded via my custom protocol handler, are cached and never reloaded. In particular, my stylesheet is cached:

        <link rel="stylesheet" type="text/css" href="./styles.css" />

    The initial request to load the HTML in the UIWebView looks like this:

        NSString* strUrl = [NSMutableString stringWithFormat: @"myprotocol:///entry?id=%d", entryID];
        NSURL* url = [NSURL URLWithString: strUrl];
        [_pCurrentWebView loadRequest: [NSURLRequest requestWithURL: url
                                                        cachePolicy: NSURLRequestReloadIgnoringLocalCacheData
                                                    timeoutInterval: 60]];

    (Note the cache policy is set to ignore, and I've verified this cache policy carries through to subsequent requests for page resources on the initial load.)

    The protocol handler loads the HTML from a database and returns it to the client using code like this:

        // create the response record
        NSURLResponse *response = [[NSURLResponse alloc] initWithURL: [request URL]
                                                            MIMEType: mimeType
                                               expectedContentLength: -1
                                                    textEncodingName: textEncodingName];
        // get a reference to the client so we can hand off the data
        id client = [self client];
        // turn off caching for this response data
        [client URLProtocol: self didReceiveResponse: response cacheStoragePolicy: NSURLCacheStorageNotAllowed];
        // set the data in the response to our jfif data
        [client URLProtocol: self didLoadData: data];
        [data release];

    (Note the response cache policy is "not allowed".) Any ideas how I can make it NOT cache my styles.css resource? I need to be able to dynamically alter the content of this resource on subsequent loads of the HTML that references this file. I thought clearing the shared URL cache would work, but it doesn't:

        [[NSURLCache sharedURLCache] removeAllCachedResponses];

    One thing that does work, but is terribly inefficient, is to dynamically cache-bust the URL for the stylesheet by adding a timestamp parameter:

        <link rel="stylesheet" type="text/css" href="./styles.css?ts=1234567890" />

    To make this work I have to load my HTML from the db and search-and-replace the stylesheet URL with a cache-busting parameter that changes on each request. I'd rather not do this.

    My presumption is that there would be no problem if I loaded my content via the built-in HTTP protocol. In that case, I'm guessing that the UIWebView looks at any Cache-Control flags in the NSURLHTTPResponse object's HTTP headers and abides by them. Since my NSURLResponse object has no HTTP headers (it's not HTTP...), perhaps UIWebView just decides to cache the resource (ignoring the NSURLRequest caching directive?). Ideas???


  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live-updating stock prices.

    I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table, then insert/update to MDC table' approach (BulkUpsert). I've got another process which then reads this data, computes other data, and saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there is a multitude of users running a C# GUI polling the MDC table and reading updates from it.

    Now, during the day when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing.

    Here's all the relevant info. Table def:

        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
         CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
        (
            [MDID] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    (StackOverflow won't let me post images until my reputation goes up to 10, so I'll add them as soon as you bump me up, hopefully as a result of this question. Image: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg)

    I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like:

    (Image: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg)

    Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction

        update c with (rowlock)
        set LastUpdate = getdate(), Value = t.Value, Source = @source
        from MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where c.lastUpdate < @updateTime
          and c.mdid not in (select mdid from MarketData
                             where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
          and c.value <> t.value

        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source
        from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
          and mdid not in (select mdid from MarketData
                           where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')

        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction

        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;

        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT * FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))

        -- Clean up the temp table
        DELETE #TEMPTABLE2

        commit

    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks 10-fold.

    I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
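    Independent of the locking question itself, one generic client-side mitigation is to retry a bulk-upsert call when it is chosen as the deadlock victim. A sketch under assumptions (Python with the pyodbc driver; SQLSTATE 40001 is the standard serialization-failure/deadlock code surfaced by SQL Server):

        import time
        import pyodbc

        def exec_with_deadlock_retry(conn, sql, attempts=3):
            for attempt in range(attempts):
                try:
                    cur = conn.cursor()
                    cur.execute(sql)
                    conn.commit()
                    return
                except pyodbc.Error as e:
                    conn.rollback()
                    # e.args[0] carries the SQLSTATE; 40001 = deadlock victim
                    if e.args and e.args[0] == "40001" and attempt < attempts - 1:
                        time.sleep(0.1 * (attempt + 1))  # brief backoff before retrying
                        continue
                    raise

    A retry loop does not remove the underlying lock-order conflict, but for an idempotent upsert like the ones above it turns a surfaced exception into a short delay.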


  • Create Auto Customization Criteria OAF Search Page

    - by PRajkumar
    1. Create a New Workspace and Project
    Right-click Workspaces and click "Create New OA Workspace"; name it PRajkumarCustSearch. A new OA Project will automatically be created as well. Name the project CustSearchDemo and the package prajkumar.oracle.apps.fnd.custsearchdemo.

    2. Create a New Application Module (AM)
    Right-click CustSearchDemo > New > ADF Business Components > Application Module
    Name -- CustSearchAM
    Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server

    3. Enable Passivation for the Root UI Application Module (AM)
    Right-click CustSearchAM > Edit CustSearchAM > Custom Properties
    Name -- RETENTION_LEVEL
    Value -- MANAGE_STATE
    Click Add > Apply > OK

    4. Create a test table and insert some data into it (for testing purposes)

        CREATE TABLE xx_custsearch_demo (
          -- Data Columns
          column1             VARCHAR2(100),
          column2             VARCHAR2(100),
          column3             VARCHAR2(100),
          column4             VARCHAR2(100),
          -- Who Columns
          last_update_date    DATE    NOT NULL,
          last_updated_by     NUMBER  NOT NULL,
          creation_date       DATE    NOT NULL,
          created_by          NUMBER  NOT NULL,
          last_update_login   NUMBER
        );

        INSERT INTO xx_custsearch_demo VALUES('v1','v2','v3','v4',SYSDATE,0,SYSDATE,0,0);
        INSERT INTO xx_custsearch_demo VALUES('v1','v3','v4','v5',SYSDATE,0,SYSDATE,0,0);
        INSERT INTO xx_custsearch_demo VALUES('v2','v3','v4','v5',SYSDATE,0,SYSDATE,0,0);
        INSERT INTO xx_custsearch_demo VALUES('v3','v4','v5','v6',SYSDATE,0,SYSDATE,0,0);

    Now we have 4 records in our custom table.

    5. Create a New Entity Object (EO)
    Right-click CustSearchDemo > New > ADF Business Components > Entity Object
    Name -- CustSearchEO
    Package -- prajkumar.oracle.apps.fnd.custsearchdemo.schema.server
    Database Objects -- XX_CUSTSEARCH_DEMO
    Note -- By default ROWID will be the primary key if we do not make any column the primary key.
    Check the Accessors, Create Method, Validation Method and Remove Method.

    6. Create a New View Object (VO)
    Right-click CustSearchDemo > New > ADF Business Components > View Object
    Name -- CustSearchVO
    Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server
    In Step 2, on the Entity page, select CustSearchEO and shuttle it to the selected list.
    In Step 3, in the Attributes window, select columns Column1, Column2, Column3 and Column4, and shuttle them to the selected list.
    On the Java page, deselect "Generate Java File for View Object Class: CustSearchVOImpl" and select "Generate Java File for View Row Class: CustSearchVORowImpl".

    7. Add Your View Object to the Root UI Application Module
    Right-click CustSearchAM > Application Modules > Data Model
    Select CustSearchVO and shuttle it to the Data Model list.

    8. Create a New Page
    Right-click CustSearchDemo > New > Web Tier > OA Components > Page
    Name -- CustSearchPG
    Package -- prajkumar.oracle.apps.fnd.custsearchdemo.webui

    9. Select CustSearchPG and go to the structure pane, where a default region has been created.

    10. Select region1 and set the following properties:
    ID -- PageLayoutRN
    Region Style -- PageLayout
    AM Definition -- prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM
    Window Title -- AutoCustomize Search Page
    Title -- AutoCustomization Search Page
    Auto Footer -- True

    11. Add a Query Bean to Your Page
    Right-click PageLayoutRN > New > Region
    Select the new region region1 and set the following properties:
    ID -- QueryRN
    Region Style -- query
    Construction Mode -- autoCustomizationCriteria
    Include Simple Panel -- False
    Include Views Panel -- False
    Include Advanced Panel -- False

    12. Create a New Region of Style Table
    Right-click QueryRN > New > Region Using Wizard
    Application Module -- prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM
    Available View Usages -- CustSearchVO1
    In Step 2, in Region Properties, set the following properties:
    Region ID -- CustSearchTable
    Region Style -- Table
    In Step 3, in View Attributes, shuttle all the items (Column1, Column2, Column3, Column4) from "Available View Attributes" to "Selected View Attributes".
    In Step 4, on the Region Items page, set the style to "messageStyledText" for all items.

    13. Select CustSearchTable in the structure panel and set the Width property to 100%.

    14. Include a Simple Search Panel
    Right-click QueryRN > New > simpleSearchPanel
    Automatically, region2 (header region) and region1 (messageComponentLayout region) are created. Set the following properties for region2:
    Id -- SimpleSearchHeader
    Text -- Simple Search

    15. Now right-click the messageComponentLayout region (SimpleSearchMappings) and create two message text input beans, setting the properties below on each:

    MessageTextInputBean1
    Id -- SearchColumn1
    Search Allowed -- True
    Data Type -- VARCHAR2
    Maximum Length --
    CSS Class -- OraFieldText
    Prompt -- Column1

    MessageTextInputBean2
    Id -- SearchColumn2
    Search Allowed -- True
    Data Type -- VARCHAR2
    Maximum Length -- 100
    CSS Class -- OraFieldText
    Prompt -- Column2

    16. Now right-click Query Components and create Simple Search Mappings. SimpleSearchMappings and QueryCriteriaMap1 are then created automatically.

    17. Now select QueryCriteriaMap1 and set the properties below:
    Id -- SearchColumn1Map
    Search Item -- SearchColumn1
    Result Item -- Column1

    18. Now again right-click simpleSearchMappings > New > queryCriteriaMap, and set the properties below:
    Id -- SearchColumn2Map
    Search Item -- SearchColumn2
    Result Item -- Column2

    19. Congratulations, you have successfully finished the Auto Customization Criteria search page. Run your CustSearchPG page and test your work.


  • MS Access MSChart.Graph.8 not printing

    - by Tanj
    Software: Microsoft Access 2007 SP2
    Database file version: Access 2000

    I have an Access program that I inherited from a previous employee. It uses forms for reports, and since I don't have much experience in Access I have continued to do this. I have created a copy of the program for another project and modified it to suit. I am having trouble getting more than one chart to print. All the charts display in form view, and they all have the same properties (excepting data, position, etc.). For some reason they are not printing; they don't even show up in the print preview.

    I am thinking it must be something with the graphs themselves, as they sometimes lose all information. I have to open the graphs in edit mode and change the data source from column to row and back again so that they get redrawn. (Refresh doesn't fix it.) So right now I don't even have a clue as to where to look, so ideas are welcome.

    Edit #1

    It seems to be a problem with linking to an unbound form:

        Subform Field Linker: Can't build a link between unbound forms.

    The query for the main form is:

        SELECT tTest.ixTest, tMotorTypes.ixMotorType, tMotorTypes.asMotorType, tMotorTypes.fDeprecated,
               tTestType.asTest, tTest.asSerialNum, tTest.asOrderNum, tTest.asFrameNum, tTest.asRotorNum,
               tTest.asOperator, tTest.iStation, tTest.dtTestDate, tTest.ixTestType
        FROM tMotorTypes
        INNER JOIN (tTestType INNER JOIN tTest ON tTestType.ixTestType = tTest.ixTestType)
                ON tMotorTypes.ixMotorType = tTest.ixMotorType;

    The query for the chart is:

        SELECT qGraphRSTTemperatures.Frequency, qGraphRSTTemperatures.[Drive End],
               qGraphRSTTemperatures.[Non Drive End], qGraphRSTTemperatures.[Air In], qGraphRSTTemperatures.Core
        FROM qGraphRSTTemperatures
        ORDER BY qGraphRSTTemperatures.ixTemperature;

    Query qGraphRSTTemperatures:

        SELECT tElectricalData.dblFrequency AS Frequency, tTemperatures.dblDrvEnd AS [Drive End],
               tTemperatures.dblNonDrvEnd AS [Non Drive End], tTemperatures.dblAirIn AS [Air In],
               tTemperatures.dblCore AS Core, tSubTest.ixTest, tTemperatures.ixTemperature
        FROM (tSubTest INNER JOIN tElectricalData ON tSubTest.ixSubTest = tElectricalData.ixSubTest)
        LEFT JOIN tTemperatures ON tElectricalData.ixElectrical = tTemperatures.ixElectrical
        WHERE (((tSubTest.ixSubTestType) = 5))
        ORDER BY tSubTest.ixTest, tTemperatures.ixTemperature;

    So how come, in form view, it shows the graph with the correct data when linked thus:

        Child field:  ixTest
        Master field: ixTest

    but won't print the graph? The graph will print if I remove the links, but then I have all the data from the chart query, as it is not limited by ixTest.

    Edit #2

    It seems to be a data retrieval/rendering issue in printing. Is there anything in printing that changes the context of records with respect to parent/child relationships?


  • JSF: how to update the list after deleting an item from it

    - by Harry Pham
    It will take a moment for me to explain this, so please stay with me. I have a table COMMENT that has a OneToMany relationship with itself.

        @Entity
        public class Comment {
            ...
            @ManyToOne(optional=true, fetch=FetchType.LAZY)
            @JoinColumn(name="REPLYTO_ID")
            private Comment replyTo;

            @OneToMany(mappedBy="replyTo", cascade=CascadeType.ALL)
            private List<Comment> replies = new ArrayList<Comment>();

            public void addReply(NewsFeed reply) {
                replies.add(reply);
                reply.setReplyTo(this);
            }

            public void removeReply(NewsFeed reply) {
                replies.remove(reply);
            }
        }

    So you can think of it like this: each comment can have a list of replies, which are also of type Comment. Now it is very easy for me to delete an original comment and get the updated list back. All I need to do after the delete is this:

        allComments = myEJB.getAllComments(); // this will query the db and return the updated list

    But I am having a problem when trying to delete replies and get the updated list back. So here is how I delete the replies. Inside my managed bean I have:

        // Before invoking this method, I have the values of originalFeed and deletedFeed set.
        // The original comments are displayed inside a p:dataTable X, and the replies are
        // displayed inside p:dataTable Y, which is inside X. So when I click the delete button,
        // I know which comment I want to delete, and if it is a reply, I will know
        // which one is its original post.
        public void deleteFeed() {
            if (this.deletedFeed != null) {
                scholarEJB.deleteFeeds(this.deletedFeed);
                if (this.originalFeed != null) {
                    // Since originalFeed is not null, this is the reply
                    // that I want to delete
                    scholarEJB.removeReply(this.originalFeed, this.deletedFeed);
                }
                feeds = scholarEJB.findAllFeed();
            }
        }

    Then inside my EJB scholarEJB I have:

        public void removeReply(NewsFeed comment, NewsFeed reply) {
            comment = em.merge(comment);
            comment.removeReply(reply);
            em.persist(comment);
        }

        public void deleteFeeds(NewsFeed e) {
            e = em.find(NewsFeed.class, e.getId());
            em.remove(e);
        }

    When I get out, the entity (the reply) gets correctly removed from the database, but a reference to that reply is still there inside the feeds list. It is only when I log out and log back in that the reply disappears. Please help.


  • How to populate JList with data from another JList

    - by Zhen Le
    I have a MySQL database which contains data I would like to populate into a JList in my Java program. I have two JLists: one is filled with event titles, and the second is to be filled with guest names. What I would like is that when the user clicks on any of the event titles, the second JList shows all the guest names that belong to that event.

    I have already successfully populated the first JList with all the event titles. What I'm having trouble with is that when the user clicks on an event title, the guest names show up twice in the second JList. How can I make them show only once? Here is what I have so far.

    Java class:

        private JList getJListEvents() {
            if (jListEvents == null) {
                jListEvents = new JList();
                Border border = BorderFactory.createTitledBorder(
                        BorderFactory.createBevelBorder(1, Color.black, Color.black),
                        "Events", TitledBorder.LEFT, TitledBorder.TOP);
                jListEvents.setBorder(border);
                jListEvents.setModel(new DefaultListModel());
                jListEvents.setBounds(new Rectangle(15, 60, 361, 421));
                Events lEvents = new Events();
                lEvents.loadEvents(jListEvents);
                jListEvents.addListSelectionListener(new ListSelectionListener() {
                    public void valueChanged(ListSelectionEvent e) {
                        EventC eventC = new EventC();
                        //eventC.MonitorRegDetailsInfo(jListEvents, jTextFieldEventName, jTextFieldEventVenue, jTextFieldEventDate, jTextFieldEventTime, jTextAreaEventDesc);
                        //eventC.MonitorRegPackageInfo(jListEvents, jTextFieldBallroom, jTextFieldBallroomPrice, jTextFieldMeal, jTextFieldMealPrice, jTextFieldEntertainment, jTextFieldEntertainmentPrice);
                        eventC.MonitorRegGuest(jListEvents, jListGuest);
                    }
                });
            }
            return jListEvents;
        }

    Controller class:

        public void MonitorRegGuest(JList l, JList l2) {
            String event = l.getSelectedValue().toString();
            Events retrieveGuest = new Events(event);
            retrieveGuest.loadGuests(l2);
        }

    Class with all the SQL statements:

        public void loadGuests(JList l) {
            ResultSet rs = null;
            ResultSet rs2 = null;
            ResultSet rs3 = null;
            MySQLController db = new MySQLController();
            db.getConnection();

            String sqlQuery = "SELECT MemberID FROM event WHERE EventName = '" + EventName + "'";
            try {
                rs = db.readRequest(sqlQuery);
                while (rs.next()) {
                    MemberID = rs.getString("MemberID");
                }
            } catch (SQLException e) {
                e.printStackTrace();
            }

            String sqlQuery2 = "SELECT GuestListID FROM guestlist WHERE MemberID = '" + MemberID + "'";
            try {
                rs2 = db.readRequest(sqlQuery2);
                while (rs2.next()) {
                    GuestListID = rs2.getString("GuestListID");
                }
            } catch (SQLException e) {
                e.printStackTrace();
            }

            String sqlQuery3 = "SELECT Name FROM guestcontact WHERE GuestListID = '" + GuestListID + "'";
            try {
                rs3 = db.readRequest(sqlQuery3);
                while (rs3.next()) {
                    ((DefaultListModel) l.getModel()).addElement(rs3.getString("Name"));
                }
            } catch (SQLException e) {
                e.printStackTrace();
            }
            db.terminate();
        }

    Thanks in advance!

    Read the article

  • Complex SQL query with group by and two rows in one

    - by Ricket
    Okay, I need help. I'm usually pretty good at SQL queries but this one baffles me. By the way, this is not a homework assignment; it's a real situation in an Access database, and I've written the requirements below myself. Here is my table layout (it's in Access 2007, if that matters; I'm writing the query using SQL):

        Id (primary key)
        PersonID (foreign key)
        EventDate
        NumberOfCredits
        SuperCredits (boolean)

    There are events that people go to. They can earn normal credits, or super credits, or both at one event. The SuperCredits column is true if the row represents a number of super credits earned at the event, or false if it represents normal credits. So for example, if there is an event which person 174 attends, and they earn 3 normal credits and 1 super credit at the event, the following two rows would be added to the table:

        ID  PersonID  EventDate  NumberOfCredits  SuperCredits
        1   174       1/1/2010   3                false
        2   174       1/1/2010   1                true

    It is also possible that the person could have done two separate things at the event, so there might be more than two rows for one event, and it might look like this:

        ID  PersonID  EventDate  NumberOfCredits  SuperCredits
        1   174       1/1/2010   1                false
        2   174       1/1/2010   2                false
        3   174       1/1/2010   1                true

    Now we want to print out a report with the following columns:

        PersonID
        LastEventDate
        NumberOfNormalCredits
        NumberOfSuperCredits

    The report will have one row per person. The row will show the latest event that the person attended, and the normal and super credits that the person earned at that event. What I am asking of you is to write, or help me write, the SQL query to SELECT the data and GROUP BY and SUM() and whatnot. Or, let me know if this is for some reason not possible, and how to organize my data to make it possible. This is extremely confusing and I understand if you do not take the time to puzzle through it. I've tried to simplify it as much as possible, but definitely ask any questions if you give it a shot and need clarification. I'll be trying to figure it out, but I'm having a real hard time with it; this is grouping beyond my experience...
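
    One way this is commonly solved, sketched in Access SQL under the assumption that the table is named Credits: find each person's latest event date in a derived table, join back to it, and pivot the two credit types with conditional sums (IIF). Access can be picky about derived-table syntax, so the subquery may need to be stored as a saved query instead:

        SELECT c.PersonID,
               latest.LastEventDate,
               SUM(IIF(c.SuperCredits, 0, c.NumberOfCredits)) AS NumberOfNormalCredits,
               SUM(IIF(c.SuperCredits, c.NumberOfCredits, 0)) AS NumberOfSuperCredits
        FROM Credits AS c
        INNER JOIN (SELECT PersonID, MAX(EventDate) AS LastEventDate
                    FROM Credits
                    GROUP BY PersonID) AS latest
                ON c.PersonID = latest.PersonID
               AND c.EventDate = latest.LastEventDate
        GROUP BY c.PersonID, latest.LastEventDate;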

    Read the article

  • New to ASP.NET. Need help debugging this email form.

    - by Roeland
    Hey guys, first of all, I am a PHP developer and most of .NET is alien to me, which is why I am posting here! I just migrated a site from one web host to another. The whole site is written in .NET. None of the site is database driven, so most of it works, except for the contact form. The output on the site simply states: "There has been an error - please try to submit the contact form again, if you continue to experience problems, please notify our webmaster." This is just the generic message it prints when execution reaches the "catch" part of the email function. I went into web.config and changed the parameters:

        <emailaddresses>
          <add name="System" value="[email protected]"/>
          <add name="Contact" value="[email protected]"/>
          <add name="Info" value="[email protected]"/>
        </emailaddresses>
        <general>
          <add name="WebSiteDomain" value="hoyespharmacy.com"/>
        </general>

    Then the .cs file for the contact page contains the mail function EmailFormData():

        private void EmailFormData()
        {
            try
            {
                StringBuilder body = new StringBuilder();
                body.Append("Name" + ": " + txtName.Text + "\n\r");
                body.Append("Phone" + ": " + txtPhone.Text + "\n\r");
                body.Append("Email" + ": " + txtEmail.Text + "\n\r");
                body.Append("Fax" + ": " + txtEmail.Text + "\n\r");
                body.Append("Subject" + ": " + ddlSubject.SelectedValue + "\n\r");
                body.Append("Message" + ": " + txtMessage.Text);

                MailMessage mail = new MailMessage();
                mail.IsBodyHtml = false;
                mail.To.Add(new MailAddress(Settings.GetEmailAddress("System")));
                mail.Subject = "Contact Us Form Submission";
                mail.From = new MailAddress(Settings.GetEmailAddress("System"), Settings.WebSiteDomain);
                mail.Body = body.ToString();

                SmtpClient smtpcl = new SmtpClient();
                smtpcl.Send(mail);
            }
            catch
            {
                Utilities.RedirectPermanently(Request.Url.AbsolutePath + "?messageSent=false");
            }
        }

    How do I see what the actual error is? I figure I can do something with the "catch" part of the function. Any pointers? Thanks!
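
    A minimal sketch of one way to surface the real exception while debugging, assuming you can edit and redeploy the .cs file: catch the exception object and write it to the response (or a trace log) instead of redirecting. This leaks internals to visitors, so it is for diagnosis only:

        catch (Exception ex)
        {
            // Temporary diagnostics: show the actual SMTP/config error.
            // Remove this before going back to production.
            Response.Write(Server.HtmlEncode(ex.ToString()));
            // Alternatively, keep the redirect and log instead:
            // System.Diagnostics.Trace.WriteLine(ex.ToString());
        }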

    Read the article

  • Understanding SingleTableEntityPersister and QueryLoader

    - by Iapilgrim
    Hi, I have this Hibernate model:

        @Cache(usage = CacheConcurrencyStrategy.NONE, region = SitesConstants.CACHE_REGION)
        public class Node extends StatefulEntity implements Inheritable, Cloneable {
            private Node _parent;
            private List<Node> _childNodes;
            ..
        }

        @Cache(usage = CacheConcurrencyStrategy.NONE, region = SitesConstants.CACHE_REGION)
        public class Page extends Node implements Defaultable, Securable {
            private RootZone _rootZone;
            ......

            @OneToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "root_zone_id", insertable = false, updatable = false)
            public RootZone getRootZone() {
                return _rootZone;
            }

            public void setRootZone(RootZone rootZone) {
                if (rootZone != null) {
                    rootZone.setPageId(this.getId());
                    _rootZone = rootZone;
                }
            }
        }

    I want to get all pages (a call to getSiteTree), so I use this query:

        String hpql = "SELECT n FROM Node n";

    Looking at the trace, I find:

        Page.setRootZone(RootZone) line: 155
        NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not available [native method]
        NativeMethodAccessorImpl.invoke(Object, Object[]) line: 39
        DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25
        Method.invoke(Object, Object...) line: 597
        BasicPropertyAccessor$BasicSetter.set(Object, Object, SessionFactoryImplementor) line: 66
        PojoEntityTuplizer(AbstractEntityTuplizer).setPropertyValues(Object, Object[]) line: 352
        PojoEntityTuplizer.setPropertyValues(Object, Object[]) line: 232
        SingleTableEntityPersister(AbstractEntityPersister).setPropertyValues(Object, Object[], EntityMode) line: 3580
        TwoPhaseLoad.initializeEntity(Object, boolean, SessionImplementor, PreLoadEvent, PostLoadEvent) line: 152
        QueryLoader(Loader).initializeEntitiesAndCollections(List, Object, SessionImplementor, boolean) line: 877
        QueryLoader(Loader).doQuery(SessionImplementor, QueryParameters, boolean) line: 752
        QueryLoader(Loader).doQueryAndInitializeNonLazyCollections(SessionImplementor, QueryParameters, boolean) line: 259
        QueryLoader(Loader).doList(SessionImplementor, QueryParameters) line: 2232
        QueryLoader(Loader).listIgnoreQueryCache(SessionImplementor, QueryParameters) line: 2129
        QueryLoader(Loader).list(SessionImplementor, QueryParameters, Set, Type[]) line: 2124
        QueryLoader.list(SessionImplementor, QueryParameters) line: 401
        QueryTranslatorImpl.list(SessionImplementor, QueryParameters) line: 363
        HQLQueryPlan.performList(QueryParameters, SessionImplementor) line: 196
        SessionImpl.list(String, QueryParameters) line: 1149
        QueryImpl.list() line: 102
        QueryImpl.getResultList() line: 67
        NodeDaoImpl.getSiteTree(long) line: 358
        PageNodeServiceImpl.getSiteTree(long) line: 797
        NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not available [native method]
        NativeMethodAccessorImpl.invoke(Object, Object[]) line: 39
        DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25
        Method.invoke(Object, Object...) line: 597
        AopUtils.invokeJoinpointUsingReflection(Object, Method, Object[]) line: 307
        JdkDynamicAopProxy.invoke(Object, Method, Object[]) line: 198
        $Proxy100.getSiteTree(long) line: not available

    The call to setRootZone in Page makes Hibernate issue a hit to the database. I don't want this. So my questions are: why does the query "SELECT n FROM Node n" produce this unexpected trace, while the query "SELECT n.nodename FROM Node n" does not? What is the mechanism behind this? Note: I'm using Hibernate level-2 caching. In case I don't want to see that trace, i.e. I just want the Node data only, what should I do? Thanks for your help. Sorry for my bad English :( Van
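
    A sketch of the likely mechanism, offered with hedging: "SELECT n FROM Node n" hydrates full entities, so Hibernate invokes every property setter, including setRootZone, and the custom logic inside that setter (rootZone.setPageId(...)) touches the lazy association, which can force an extra database hit. A projection such as "SELECT n.nodename FROM Node n" returns scalars, never calls the setters, and so never trips it. Keeping the mapped setter a plain assignment and moving the wiring into a separate helper would avoid the hit; the helper name below is hypothetical:

        // Called by Hibernate during hydration: must have no side effects.
        public void setRootZone(RootZone rootZone) {
            _rootZone = rootZone;
        }

        // Called by application code when attaching a new zone.
        public void attachRootZone(RootZone rootZone) {
            if (rootZone != null) {
                rootZone.setPageId(getId());
                _rootZone = rootZone;
            }
        }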

    Read the article

  • Under what circumstances would a LINQ-to-SQL Entity "lose" a changed field?

    - by John Rudy
    I'm going nuts over what should be a very simple situation. In an ASP.NET MVC 2 app (not that I think this matters), I have an edit action which takes a very small entity and makes a few changes. The key portion (outside of error handling/security) looks like this:

        Todo t = Repository.GetTodoByID(todoID);
        UpdateModel(t);
        Repository.Save();

    Todo is the very simple, small entity with the following fields: ID (primary key), FolderID (foreign key), PercentComplete, TodoText, IsDeleted and SaleEffortID (foreign key). Each of these obviously corresponds to a field in the database. When UpdateModel(t) is called, t does get correctly updated for all fields which have changed. When Repository.Save() is called, by the time the SQL is written out, FolderID reverts back to its original value. The complete code of Repository.Save():

        public void Save()
        {
            myDataContext.SubmitChanges();
        }

    myDataContext is an instance of the DataContext class created by the LINQ-to-SQL designer. Nothing custom has been done to this aside from adding some common interfaces to some of the entities. I've validated that the FolderID is getting lost before the call to Repository.Save() by logging out the generated SQL:

        UPDATE [Todo].[TD_TODO]
        SET [TD_PercentComplete] = @p4, [TD_TodoText] = @p5, [TD_IsDeleted] = @p6
        WHERE ([TD_ID] = @p0) AND ([TD_TDF_ID] = @p1) AND /* Folder ID */
              ([TD_PercentComplete] = @p2) AND ([TD_TodoText] = @p3) AND
              (NOT ([TD_IsDeleted] = 1)) AND ([TD_SE_ID] IS NULL) /* SaleEffort ID */
        -- @p0: Input BigInt (Size = -1; Prec = 0; Scale = 0) [5]
        -- @p1: Input BigInt (Size = -1; Prec = 0; Scale = 0) [1] /* this SHOULD be 4 and in the update list */
        -- @p2: Input TinyInt (Size = -1; Prec = 0; Scale = 0) [90]
        -- @p3: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [changing text]
        -- @p4: Input TinyInt (Size = -1; Prec = 0; Scale = 0) [0]
        -- @p5: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [changing text foo]
        -- @p6: Input Bit (Size = -1; Prec = 0; Scale = 0) [True]
        -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 4.0.30319.1

    So somewhere between UpdateModel(t) (where I've validated in the debugger that FolderID was updated) and the output of this SQL, the FolderID reverts. The other fields all save. (Well, OK, I haven't validated SaleEffortID yet, because that subsystem isn't really ready yet, but everything else saves.) I've exhausted my own means of research on this: does anyone know of conditions which would cause a partial entity reset (e.g., something to do with long foreign keys?), and/or how to work around this?
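
    One commonly reported cause, offered as a hedged guess rather than a confirmed diagnosis: LINQ to SQL can ignore a change made directly to a foreign-key scalar (FolderID) when the corresponding association (the EntityRef behind a designer-generated Folder property) has already been loaded, silently reverting the scalar when change tracking reconciles the two. A sketch of the workaround, assuming such a Folder association property exists and the controller can reach the DataContext (or a repository method that does this lookup):

        Todo t = Repository.GetTodoByID(todoID);
        UpdateModel(t);
        // Assign through the association, not just the FK column, so the
        // change tracker registers the new parent. Names are assumptions.
        t.Folder = myDataContext.Folders.Single(f => f.ID == t.FolderID);
        Repository.Save();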

    Read the article

  • How do I display checked values in checkboxes on a Google plus style popup box?

    - by user946742
    After reading the post "Google plus popup box when hovering over thumbnail?" on Stack Overflow, I was inspired to add the same thing to my site. I managed to do so, and the script adds the contacts to my database. So far awesome! However, my problem (which also appears in the example) is that it does not display the "checked" value... so the user will never know if they have already added someone to their list or not. Is PHP the correct way to display checked values? Here is my HTML code:

        <ul style="list-style: none;padding:2px;">
          <li style="padding:5px 2px;">
            <input type="checkbox" id="Friends" name="circles" value="Friends" '.$checked1.'/> Friends
          </li>
          <li style="padding:5px 2px;">
            <input type="checkbox" id="Following" name="circles" value="Following" '.$checked2.'/> Following
          </li>
          <li style="padding:5px 2px;">
            <input type="checkbox" id="Family" name="circles" value="Family" '.$checked3.'/> Family
          </li>
          <li style="padding:5px 2px;">
            <input type="checkbox" id="Acquaintances" name="circles" value="Acquaintances" '.$checked4.'/> Acquaintances
          </li>
        </ul>

    And my PHP code is:

        if($circle_check_friends>0) {
            $ckecked1='checked=""';
        }
        else if ($circle_check_following>0) {
            $ckecked2='checked=""';
        }
        else if ($circle_check_family>0) {
            $ckecked3='checked=""';
        }
        else if ($circle_check_acquaintances>0) {
            $ckecked4='checked=""';
        }
        else if ($circle_check_friends=0) {
            $ckecked1='';
        }
        else if ($circle_check_following=0) {
            $ckecked2='';
        }
        else if ($circle_check_family=0) {
            $ckecked3='';
        }
        else if ($circle_check_acquaintances=0) {
            $ckecked4='';
        }

    I'm lost because this is not giving me the result I want, i.e. for the checked values to be displayed according to the user's choice. Your help is highly appreciated. Thank you all in advance. George
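
    A sketch of a likely fix, assuming the four $circle_check_* counts are already loaded from the database: the chain above mixes two spellings ($checked1 in the HTML versus $ckecked1 in the PHP), uses assignment (=) where a comparison (==) was meant, and links everything with else if, so at most one variable is ever set per request. Evaluating each circle independently avoids all three problems:

        <?php
        // Each circle gets its own independent test; no elseif chain,
        // one consistent spelling matching the HTML.
        $checked1 = ($circle_check_friends       > 0) ? 'checked="checked"' : '';
        $checked2 = ($circle_check_following     > 0) ? 'checked="checked"' : '';
        $checked3 = ($circle_check_family        > 0) ? 'checked="checked"' : '';
        $checked4 = ($circle_check_acquaintances > 0) ? 'checked="checked"' : '';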

    Read the article

  • Need help with joins in sqlalchemy

    - by Steve
    I'm new to Python, as well as SQL Alchemy, but not the underlying development and database concepts. I know what I want to do and how I'd do it manually, but I'm trying to learn how an ORM works. I have two tables, Images and Keywords. The Images table contains an id column that is its primary key, as well as some other metadata. The Keywords table contains only an id column (foreign key to Images) and a keyword column. I'm trying to properly declare this relationship using the declarative syntax, which I think I've done correctly:

        Base = declarative_base()

        class Keyword(Base):
            __tablename__ = 'Keywords'
            __table_args__ = {'mysql_engine': 'InnoDB'}
            id = Column(Integer, ForeignKey('Images.id', ondelete='CASCADE'), primary_key=True)
            keyword = Column(String(32), primary_key=True)

        class Image(Base):
            __tablename__ = 'Images'
            __table_args__ = {'mysql_engine': 'InnoDB'}
            id = Column(Integer, primary_key=True, autoincrement=True)
            name = Column(String(256), nullable=False)
            keywords = relationship(Keyword, backref='image')

    This represents a many-to-many relationship. One image can have many keywords, and one keyword can relate back to many images. I want to do a keyword search of my images. I've tried the following with no luck. Conceptually this would've been nice, but I understand why it doesn't work:

        image = session.query(Image).filter(Image.keywords.contains('boy'))

    I keep getting errors about no foreign key relationship, which seems clearly defined to me. I saw something about making sure I get the right 'join', and I'm using 'from sqlalchemy.orm import join', but still no luck:

        image = session.query(Image).select_from(join(Image, Keyword)).\
            filter(Keyword.keyword == 'boy')

    I added the specific join clause to the query to help it along, though as I understand it, I shouldn't have to do this:

        image = session.query(Image).select_from(join(Image, Keyword, Image.id == Keyword.id)).\
            filter(Keyword.keyword == 'boy')

    So finally I switched tactics and tried querying the keywords and then using the backreference. However, when I try to use the '.images' iterating over the result, I get an error that the 'image' property doesn't exist, even though I did declare it as a backref:

        result = session.query(Keyword).filter(Keyword.keyword == 'boy').all()

    I want to be able to query a unique set of image matches on a set of keywords. I just can't guess my way to the syntax, and I've spent days reading the SQL Alchemy documentation trying to piece this out myself. I would very much appreciate anyone who can point out what I'm missing.
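
    A sketch of queries that should work against the models as declared above: join through the relationship property, so SQLAlchemy knows which foreign key to use, or filter with the relationship's any(). Note also that backref='image' (singular) creates a Keyword.image attribute, not .images:

        # Join through the declared relationship, then filter on the keyword column.
        images = (
            session.query(Image)
            .join(Image.keywords)
            .filter(Keyword.keyword == 'boy')
            .all()
        )

        # Equivalent form without an explicit join; handy for a set of keywords,
        # with distinct() giving the unique matches the question asks for.
        images = (
            session.query(Image)
            .filter(Image.keywords.any(Keyword.keyword.in_(['boy', 'dog'])))
            .distinct()
            .all()
        )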

    Read the article

  • log4j performance

    - by Bob
    Hi, I'm developing a web app, and I'd like to log some information to help me improve and observe the app. (I'm using Tomcat 6.) First I thought I would use StringBuilders, append the logs to them, and have a task persist them into the database every 2 minutes or so, because I was worried about the performance of the out-of-the-box logging systems. Then I ran some tests, especially with log4j. Here is my code:

    Main.java:

        public static void main(String[] args) {
            Thread[] threads = new Thread[LoggerThread.threadsNumber];
            for (int i = 0; i < LoggerThread.threadsNumber; ++i) {
                threads[i] = new Thread(new LoggerThread("name - " + i));
            }
            LoggerThread.startTimestamp = System.currentTimeMillis();
            for (int i = 0; i < LoggerThread.threadsNumber; ++i) {
                threads[i].start();
            }
        }

    LoggerThread.java:

        public class LoggerThread implements Runnable {
            public static int threadsNumber = 10;
            public static long startTimestamp;
            private static int counter = 0;
            private String name;

            public LoggerThread(String name) {
                this.name = name;
            }

            private Logger log = Logger.getLogger(this.getClass());

            @Override
            public void run() {
                for (int i = 0; i < 10000; ++i) {
                    log.info(name + ": " + i);
                    if (i == 9999) {
                        int c = increaseCounter();
                        if (c == threadsNumber) {
                            System.out.println("Elapsed time: " + (System.currentTimeMillis() - startTimestamp));
                        }
                    }
                }
            }

            private synchronized int increaseCounter() {
                return ++counter;
            }
        }

    log4j.properties:

        log4j.logger.main.LoggerThread=debug, f
        log4j.appender.f=org.apache.log4j.RollingFileAppender
        log4j.appender.f.layout=org.apache.log4j.PatternLayout
        log4j.appender.f.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
        log4j.appender.f.File=c:/logs/logging.log
        log4j.appender.f.MaxFileSize=15000KB
        log4j.appender.f.MaxBackupIndex=50

    I think this is a very common configuration for log4j. First I used log4j 1.2.14, then I realized there was a newer version, so I switched to 1.2.16. Here are the figures (all in milliseconds):

        LoggerThread.threadsNumber = 10
        1.2.14: 4235, 4267, 4328, 4282
        1.2.16: 2780, 2781, 2797, 2781

        LoggerThread.threadsNumber = 100
        1.2.14: 41312, 41014, 42251
        1.2.16: 25606, 25729, 25922

    I think this is very fast. Don't forget that in every cycle the run method does not just log into the file; it also has to concatenate strings (name + ": " + i) and check an if test (i == 9999). When threadsNumber is 10, there are 100,000 loggings, if tests and concatenations. When it is 100, there are 1,000,000. (I've read somewhere that the JVM uses StringBuilder's append for concatenation, not simple concatenation.) Did I miss something? Am I doing something wrong? Did I forget any factor that could decrease the performance? If these figures are correct, I think I don't have to worry about log4j's performance even if I log heavily, do I?
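
    If file I/O ever does become the bottleneck, log4j 1.2 ships an AsyncAppender that hands events to a background thread. It cannot be configured from a .properties file (only via XML or code); a hedged sketch of wiring it up programmatically, reusing the appender named "f" from the configuration above (the class and method names here are illustrative):

        import org.apache.log4j.Appender;
        import org.apache.log4j.AsyncAppender;
        import org.apache.log4j.Logger;

        public class AsyncSetup {
            public static void install() {
                Logger target = Logger.getLogger("main.LoggerThread");
                Appender file = target.getAppender("f");
                target.removeAppender(file);      // avoid writing each event twice
                AsyncAppender async = new AsyncAppender();
                async.setBufferSize(1024);        // events buffered before callers block
                async.addAppender(file);
                target.addAppender(async);
            }
        }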

    Read the article

  • Workflow for statistical analysis and report writing

    - by ws
    Does anyone have any wisdom on workflows for data analysis related to custom report writing? The use case is basically this: a client commissions a report that uses data analysis, e.g. a population estimate and related maps for a water district. The workflow then runs:

    1. The analyst downloads some data.
    2. She munges the data and saves the result (e.g. adding a column for population per unit, or subsetting the data based on district boundaries).
    3. She analyzes the data created in (2), gets close to her goal, but sees that she needs more data and so goes back to (1).
    4. Rinse and repeat until the tables and graphics meet QA/QC and satisfy the client.
    5. Write the report incorporating the tables and graphics.

    Next year, the happy client comes back and wants an update. This should be as simple as updating the upstream data with a new download (e.g. getting the building permits from the last year) and pressing a "RECALCULATE" button, unless specifications change.

    At the moment, I just start a directory and ad hoc it the best I can. I would like a more systematic approach, so I am hoping someone has figured this out... I use a mix of spreadsheets, SQL, ArcGIS, R, and Unix tools. Thanks!

    PS: Below is a basic Makefile that checks for dependencies on various intermediate datasets (with ".RData" suffix) and scripts (".R" suffix). Make uses timestamps to check dependencies, so if you 'touch ss07por.csv', it will see that this file is newer than all the files/targets that depend on it, and execute the given scripts in order to update them accordingly. This is still a work in progress, including a step for loading into an SQL database, and a step for a templating language like Sweave. Note that Make relies on tabs in its syntax, so read the manual before cutting and pasting. Enjoy and give feedback!

    http://www.gnu.org/software/make/manual/html_node/index.html#Top

        R=/home/wsprague/R-2.9.2/bin/R

        persondata.RData: ImportData.R ../../DATA/ss07por.csv Functions.R
                $R --slave -f ImportData.R

        persondata.Munged.RData: MungeData.R persondata.RData Functions.R
                $R --slave -f MungeData.R

        report.txt: TabulateAndGraph.R persondata.Munged.RData Functions.R
                $R --slave -f TabulateAndGraph.R > report.txt
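
    A usage sketch of the yearly update with the Makefile above (paths are illustrative): replacing the upstream CSV and re-running make acts as the "RECALCULATE" button, since make rebuilds only the targets that are older than their dependencies.

        $ cp ~/Downloads/ss07por.csv ../../DATA/ss07por.csv   # new upstream data
        $ make report.txt   # rebuilds only the stale .RData files, then the report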

    Read the article

< Previous Page | 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270  | Next Page >