Search Results

Search found 35003 results on 1401 pages for 'table variable'.

  • PHP templating challenge (optimizing front-end templates)

    - by Matt
    Hey all, I'm trying to do some templating optimizations and I'm wondering if it is possible to do something like this:

        function table_with_lowercase($data) {
            $out = '<table>';
            for ($i = 0; $i < 3; $i++) {
                $out .= '<tr><td>';
                $out .= strtolower($data);
                $out .= '</td></tr>';
            }
            $out .= "</table>";
            return $out;
        }

    NOTE: You do not know what $data is when you run this function. This should result in:

        <table>
        <tr><td><?php echo strtolower($data) ?></td></tr>
        <tr><td><?php echo strtolower($data) ?></td></tr>
        <tr><td><?php echo strtolower($data) ?></td></tr>
        </table>

    General case: anything that can be evaluated (compiled) ahead of time is evaluated, and any time there is an unknown variable, the variable and the functions enclosing it are output in string form instead. Here's one more example:

        function capitalize($str) {
            return ucwords(strtolower($str));
        }

    If $str is "HI ALL" then the output is: Hi All. If $str is unknown then the output is: <?php echo ucwords(strtolower($str)); ?>. In this case it would be easier to just call the function (i.e. <?php echo capitalize($str) ?>), but the first example shows how this would allow you to precompile your PHP to make it more efficient.
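
    Below is a minimal sketch, in Python rather than PHP, of the partial-evaluation idea the question describes; the UNKNOWN sentinel and the helper names are invented for illustration and are not part of the original code.

        # Partial evaluation sketch: apply a function now when its input is
        # known, otherwise emit PHP source that performs the same call later.
        UNKNOWN = object()  # hypothetical sentinel for "not known until runtime"

        def lower(value, php_expr="$data"):
            if value is UNKNOWN:
                return "<?php echo strtolower(%s) ?>" % php_expr
            return value.lower()

        def table_with_lowercase(value):
            cell = lower(value)
            rows = "".join("<tr><td>%s</td></tr>" % cell for _ in range(3))
            return "<table>%s</table>" % rows

        print(table_with_lowercase("HELLO"))   # fully evaluated "at compile time"
        print(table_with_lowercase(UNKNOWN))   # emits the deferred PHP echo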

  • Please help optimizing a long running query (left outer join, with 2 subqueries)

    - by 46and2
    Hi all. The query I need help with is:

        SELECT d.bn, d.4700, d.4500, ... , p.`Activity Description`
        FROM (
            SELECT temp.bn, temp.4700, temp.4500, ....
            FROM `tdata` temp
            GROUP BY temp.bn
            HAVING (COUNT(temp.bn) = 1)
        ) d
        LEFT OUTER JOIN (
            SELECT temp2.bn, max(temp2.FPE) AS max_fpe, temp2.`Activity Description`
            FROM `pdata` temp2
            GROUP BY temp2.bn
        ) p ON p.bn = d.bn;

    The ... represents other fields that aren't really important to solving this problem. The issue is with the second subquery - it is not using the index I have created, and I am not sure why; it seems to be because of the way TEXT fields are handled. The first subquery uses the index I have created and runs quite snappily, but an EXPLAIN on the second shows 'Using temporary; Using filesort'. Please see the indexes I have created in the table create statements below. Can anyone help me optimize this?

    By way of quick explanation: the first subquery is meant to select only records that have unique bn's, and the second, while it looks a bit wacky (with the MAX function there, which is not used in the result set), is making sure that only one record from the right side of the join is included in the result set. My table create statements are:

        CREATE TABLE `tdata` (
            `BN` varchar(15) DEFAULT NULL,
            `4000` varchar(3) DEFAULT NULL,
            `5800` varchar(3) DEFAULT NULL,
            ....
            KEY `BN` (`BN`),
            KEY `idx_t3010` (`BN`,`4700`,`4500`,`4510`,`4520`,`4530`,`4570`,`4950`,`5000`,`5010`,`5020`,`5050`,`5060`,`5070`,`5100`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8

        CREATE TABLE `pdata` (
            `BN` varchar(15) DEFAULT NULL,
            `FPE` datetime DEFAULT NULL,
            `Activity Description` text,
            ....
            KEY `BN` (`BN`),
            KEY `idx_programs_2009` (`BN`,`FPE`,`Activity Description`(100))
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8

    Thanks!
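
    One common rewrite for "latest row per group" queries like the second subquery is to compute MAX(FPE) per bn in a derived table and join back, so the TEXT column is only read for the surviving rows. A hedged sketch follows, demonstrated with Python's sqlite3 on toy data; the simplified column names and sample rows are assumptions, not the poster's schema.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE pdata (bn TEXT, FPE TEXT, activity TEXT);
        INSERT INTO pdata VALUES ('1', '2009-01-01', 'old description'),
                                 ('1', '2010-01-01', 'new description');
        """)
        # Pick the newest row per bn, then fetch its wide columns.
        rows = conn.execute("""
        SELECT p.bn, p.FPE, p.activity
        FROM pdata p
        JOIN (SELECT bn, MAX(FPE) AS max_fpe FROM pdata GROUP BY bn) m
          ON m.bn = p.bn AND m.max_fpe = p.FPE
        """).fetchall()
        print(rows)  # [('1', '2010-01-01', 'new description')]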

  • Trying to insert a row using a stored procedure with a parameter bound to an expression

    - by Arvind Singh
    Environment: ASP.NET 3.5 (C# and VB), MS SQL Server 2005 Express.

    Tables:

        Table: tableUser
            ID (primary key)
            username

        Table: userSchedule
            ID (primary key)
            thecreator (foreign key = tableUser.ID)
            other fields

    I have created a procedure that accepts a parameter username, gets the user id, and inserts a row into userSchedule.

    Problem: Using a stored procedure with a DataList control to only fetch data from the database, passing the current username with the statement below, works fine:

        protected void SqlDataSourceGetUserID_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
        {
            e.Command.Parameters["@CurrentUserName"].Value = Context.User.Identity.Name;
        }

    But while inserting using a DetailsView it shows the error "Procedure or function OASNewSchedule has too many arguments specified." I did use:

        protected void SqlDataSourceCreateNewSchedule_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
        {
            e.Command.Parameters["@CreatedBy"].Value = Context.User.Identity.Name;
        }

    DetailsView properties: autogenerate fields off, default mode insert. It shows all the fields, including some that may not be expected by the procedure, like ID (the primary key, not required by the procedure) and the CreatedBy (user id) field. So I tried removing those two fields from the DetailsView, and it shows the error "Cannot insert the value NULL into column 'CreatedBy', table 'D:\OAS\OAS\APP_DATA\ASPNETDB.MDF.dbo.OASTest'; column does not allow nulls. INSERT fails. The statement has been terminated." For some reason the parameter's value is not being set. Can anybody help me make sense of this?

  • Dependency Injection with Massive ORM: dynamic trouble

    - by Sergi Papaseit
    I've started working on an MVC 3 project that needs data from an enormous existing database. My first idea was to go ahead and use EF 4.1 and create a bunch of POCOs to represent the tables I need, but I'm starting to think the mapping will get overly complicated, as I only need some of the columns in some of the tables (thanks to Steven for the clarification in the comments). So I thought I'd give Massive ORM a try.

    I normally use a Unit of Work implementation so I can keep everything nicely decoupled and can use Dependency Injection. This is part of what I have for Massive:

        public interface ISession
        {
            DynamicModel CreateTable<T>() where T : DynamicModel, new();
            dynamic Single<T>(string where, params object[] args) where T : DynamicModel, new();
            dynamic Single<T>(object key, string columns = "*") where T : DynamicModel, new();
            // Some more methods supported by Massive here
        }

    And here's my implementation of the above interface:

        public class MassiveSession : ISession
        {
            public DynamicModel CreateTable<T>() where T : DynamicModel, new()
            {
                return new T();
            }

            public dynamic Single<T>(string where, params object[] args) where T : DynamicModel, new()
            {
                var table = CreateTable<T>();
                return table.Single(where, args);
            }

            public dynamic Single<T>(object key, string columns = "*") where T : DynamicModel, new()
            {
                var table = CreateTable<T>();
                return table.Single(key, columns);
            }
        }

    The problem comes with the First(), Last() and FindBy() methods. Massive is based around a dynamic object called DynamicModel and doesn't define any of the above methods; it handles them through a TryInvokeMember() override inherited from DynamicObject instead:

        public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
        {
        }

    I'm at a loss on how to "interface" those methods in my ISession. How could my ISession provide support for First(), Last() and FindBy()? Put another way, how can I use all of Massive's capabilities and still be able to decouple my classes from data access?
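
    The shape of the problem - members that exist only at runtime, surfaced through a single catch-all hook - can be sketched in Python, where __getattr__ plays roughly the role of DynamicObject.TryInvokeMember. This is only an illustration of the dispatch pattern, not Massive's or C#'s actual API:

        class DynamicModel:
            # Stand-in for Massive's DynamicModel: First/Last/FindBy exist
            # only at runtime, resolved by a catch-all hook.
            def __getattr__(self, name):
                if name in ("First", "Last", "FindBy"):
                    return lambda **kw: "%s(%r)" % (name, kw)
                raise AttributeError(name)

        class Session:
            # Stand-in for ISession: forwards members it does not define
            # to the wrapped dynamic model, so callers stay decoupled.
            def __init__(self, table):
                self.table = table
            def __getattr__(self, name):
                return getattr(self.table, name)

        s = Session(DynamicModel())
        print(s.First(name="x"))  # First({'name': 'x'})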

  • Linq is returning too many results when joined

    - by KallDrexx
    In my schema I have two database tables: relationships and relationship_memberships. I am attempting to retrieve all the entries from the relationships table that have a specific member in them, thus having to join it with the relationship_memberships table. I have the following method in my business object:

        public IList<DBMappings.relationships> GetRelationshipsByObjectId(int objId)
        {
            var results = from r in _context.Repository<DBMappings.relationships>()
                          join m in _context.Repository<DBMappings.relationship_memberships>()
                              on r.rel_id equals m.rel_id
                          where m.obj_id == objId
                          select r;
            return results.ToList<DBMappings.relationships>();
        }

    _context is my generic repository, using code based on the code outlined here. The problem is I have 3 records in the relationships table and 3 records in the memberships table, each membership tied to a different relationship. Two membership records have an obj_id value of 2 and the other has 3. I am trying to retrieve a list of all relationships related to object #2.

    When this LINQ runs, _context.Repository<DBMappings.relationships>() returns the correct 3 records and _context.Repository<DBMappings.relationship_memberships>() returns 3 records. However, when results.ToList() executes, the resulting list has 2 issues:

    1) The resulting list contains 6 records, all of type DBMappings.relationships. Upon further inspection there are 2 for each real relationship record, each an exact copy of the other.

    2) All relationships are returned, even those where m.obj_id == 3, even though the objId variable is correctly passed in as 2.

    Can anyone see what's going on? I've spent 2 days looking at this code and I am unable to understand what is wrong. I have joins in other LINQ queries that seem to be working great, and my unit tests show that they are still working, so I must be doing something wrong with this one. It seems like I need an extra pair of eyes on this one :)

  • mysql subselect alternative

    - by Arnold
    Hi, let's say I am analyzing how high school sports records affect school attendance. I have a table in which each row corresponds to a high school basketball game. Each game has an away team id and a home team id (foreign keys to another "team table"), a home score, an away score, and a date.

    I am writing a query that matches attendance with this season's basketball games. My sample output will be:

        (#_students_missed_class, day_of_game, home_team, away_team, home_team_wins_this_season, away_team_wins_this_season)

    I now want to add how each team did the previous season to my analysis. I have their previous season stored in the game table, so I should be able to accomplish that with a subselect. So in my main SELECT statement I add the subselect:

        SELECT COUNT(*)
        FROM game_table
        WHERE game_table.date BETWEEN 'start of previous season' AND 'end of previous season'
          AND ((game_table.home_team = team_table.id AND game_table.home_score > game_table.away_score)
            OR (game_table.away_team = team_table.id AND game_table.away_score > game_table.home_score))

    In this case team_table.id refers to the id of the home team, so I now have all of its wins calculated from the previous year. This method of calculation is, however, both time and resource intensive: the EXPLAIN shows ALL in the Type field, no key is used, and the query times out. I'm not sure how I can accomplish a more efficient query with a subselect, and it seems preposterously inefficient to have to write 4 of these queries (for home wins, home losses, away wins, away losses). I'm sure this could be more lucid - I'll absolutely add color tomorrow if anyone has questions.
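
    One way around the four correlated subselects is a single grouped pass with conditional aggregation: unpivot each game into one row per participating team, flag wins and losses, and GROUP BY team. A sketch follows using Python's sqlite3 with invented toy data; in MySQL the same SELECT should work against the real game_table.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE game_table (home_team INT, away_team INT,
                                 home_score INT, away_score INT, date TEXT);
        INSERT INTO game_table VALUES (1, 2, 50, 40, '2009-11-01'),
                                      (2, 1, 60, 30, '2009-12-01');
        """)
        rows = conn.execute("""
        SELECT team, SUM(win) AS wins, SUM(loss) AS losses FROM (
          SELECT home_team AS team,
                 home_score > away_score AS win,
                 home_score < away_score AS loss
          FROM game_table
          WHERE date BETWEEN '2009-01-01' AND '2009-12-31'
          UNION ALL
          SELECT away_team,
                 away_score > home_score,
                 away_score < home_score
          FROM game_table
          WHERE date BETWEEN '2009-01-01' AND '2009-12-31'
        ) t GROUP BY team ORDER BY team
        """).fetchall()
        print(rows)  # [(1, 1, 1), (2, 1, 1)] - each team won one and lost one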

  • Need a SQL statement to combine tables while always keeping a unique ID per entry

    - by Registered User KC
    Hi All, I need SQL code to solve a table-combination problem, described below.

    Table old_data:

        name  version  status  lastupdate  ID
        A     0.1      on      6/8/2010    1
        B     0.1      on      6/8/2010    2
        C     0.1      on      6/8/2010    3
        D     0.1      on      6/8/2010    4
        E     0.1      on      6/8/2010    5
        F     0.1      on      6/8/2010    6
        G     0.1      on      6/8/2010    7

    Table new_data (no IDs assigned yet):

        name  version  status  lastupdate
        A     0.1      on      6/18/2010
                                            # B entry deleted
        C     0.3      on      6/18/2010    # version updated
        C1    0.1      on      6/18/2010
        D     0.1      on      6/18/2010
        E     0.1      off     6/18/2010    # status updated
        F     0.1      on      6/18/2010
        G     0.1      on      6/18/2010
        H     0.1      on      6/18/2010    # newly added
        H1    0.1      on      6/18/2010    # newly added

    The differences between the new data and the old data are: the B entry was deleted, the C entry's version was updated, the E entry's status was updated, and the C1/H/H1 entries are newly added.

    What I want is to always keep the ID-name mapping from the old data table no matter how the data changes later; in other words, a name always has a unique ID number bound to it. If an entry has an update, update the data; if an entry is newly added, insert it into the table and assign a new unique ID. However, I can only use SQL with simple SELECT or UPDATE statements, so this may be too hard for me to write on my own. I hope someone with expertise can give direction - no need for details on the differences between SQL variants; a standard SQL code sample is enough. Thanks in advance! Rgs, KC
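
    A hedged sketch of one way to express this merge is an upsert keyed on a UNIQUE name column, so existing names keep their auto-assigned ID and new names receive a fresh one. The example uses Python's sqlite3 and its ON CONFLICT clause (SQLite 3.24+); MySQL's equivalent would be INSERT ... ON DUPLICATE KEY UPDATE. Table and column names follow the question, the rest is invented.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE data (id INTEGER PRIMARY KEY, name TEXT UNIQUE,
                           version TEXT, status TEXT, lastupdate TEXT);
        INSERT INTO data (name, version, status, lastupdate) VALUES
            ('A', '0.1', 'on', '6/8/2010'),
            ('C', '0.1', 'on', '6/8/2010');
        """)
        new_rows = [('A', '0.1', 'on', '6/18/2010'),
                    ('C', '0.3', 'on', '6/18/2010'),  # update keeps C's id
                    ('H', '0.1', 'on', '6/18/2010')]  # insert gets a new id
        conn.executemany("""
        INSERT INTO data (name, version, status, lastupdate) VALUES (?, ?, ?, ?)
        ON CONFLICT(name) DO UPDATE SET
            version = excluded.version,
            status = excluded.status,
            lastupdate = excluded.lastupdate
        """, new_rows)
        print(conn.execute("SELECT * FROM data ORDER BY id").fetchall())
        # [(1, 'A', '0.1', 'on', '6/18/2010'), (2, 'C', '0.3', 'on', '6/18/2010'),
        #  (3, 'H', '0.1', 'on', '6/18/2010')]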

  • Hotkey to toggle checkboxes does opposite

    - by Joel Harris
    In this jsFiddle, I have a table with checkboxes in the first column. The master checkbox in the table header functions as expected, toggling the state of all the checkboxes in the table when it is clicked. I have set up a hotkey, "shift+x", to toggle the master checkbox. The desired behavior when using the hotkey is:

    1. The master checkbox is toggled
    2. The child checkboxes each have their checked state set to match the master

    But what is actually happening is the opposite:

    1. The child checkboxes each have their checked state set to match the master
    2. The master checkbox is toggled

    Here is the relevant code:

        $(".master-select").click(function() {
            $(this).closest("table").find("tbody .row-select").prop('checked', this.checked);
        });

        function tickAllCheckboxes() {
            var master = $(".master-select").click();
        }

        // using jquery.hotkeys.js to assign the hotkey
        $(document).bind('keydown', 'shift+x', tickAllCheckboxes);

    This results in the child checkboxes having the opposite checked state of the master checkbox. Why is that happening? A fix would be nice, but I'm really after an explanation so I can understand what is happening.

  • Why are virtual methods considered early bound?

    - by AspOnMyNet
    One definition of binding is that it is the act of replacing function names with memory addresses.

    a) Thus I assume early binding means function calls are replaced with memory addresses during the compilation process, while with late binding this replacement happens at runtime?

    b) Why are virtual methods also considered early bound (meaning the target method is found at compile time, and code is created that will call this method)? As far as I know, with virtual methods the call to the actual method is resolved only at runtime, not at compile time. Thanks.

    EDIT:

    1) Given:

        A a = new A();
        a.M();

    As far as I know, it is not known at compile time where on the heap (that is, at which memory address) the instance a will be created at runtime. Now, with early binding, function calls are replaced with memory addresses during the compilation process. But how can the compiler replace a function call with a memory address if it doesn't know where on the heap the object a will be created at runtime (here I'm assuming the address of method a.M will also be at the same memory location as a)?

    2) "v-table calls are neither early nor late bound. Instead there's an offset into a table of function pointers. The offset is fixed at compile time, but which table the function pointer is chosen from depends on the runtime type of the object (the object contains a hidden pointer to its v-table), so the final function address is found at runtime."

    But assuming an object of type T is created via reflection (so the app doesn't even know of the existence of type T), how can an entry point for that type of object exist at compile time?
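
    The v-table mechanism quoted in point 2 can be sketched in a few lines of Python: the slot offset is fixed at the call site, while the table that the offset indexes into travels with the object. The classes and slot numbers here are invented purely to illustrate the idea:

        def base_m(obj):    return "Base.M"
        def derived_m(obj): return "Derived.M"

        base_vtable    = [base_m]     # slot 0 = M
        derived_vtable = [derived_m]  # same slot, different target

        class Obj:
            def __init__(self, vtable):
                self.vtable = vtable  # the hidden v-table pointer

        def call_m(obj):
            # Offset 0 is "fixed at compile time"; which function it names
            # depends on the v-table the runtime object carries.
            return obj.vtable[0](obj)

        print(call_m(Obj(base_vtable)))     # Base.M
        print(call_m(Obj(derived_vtable)))  # Derived.M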

  • array or list into Oracle using cfprocparam

    - by Travis
    I have a list of values I want to insert into a table via a stored procedure. I figured I would pass an array to Oracle and loop through it, but I don't see how to pass an array into Oracle. I'd pass a list, but I don't see how to work with the list to turn it into an array using PL/SQL (I'm fairly new to PL/SQL). Am I approaching this the wrong way? Using Oracle 9i and CF8. TIA!

    EDIT: Perhaps I'm thinking about this the wrong way? I'm sure I'm not doing anything new here... I figured I'd convert the list to an associative array, then loop over the array, because Oracle doesn't seem to work well with lists (in my limited observation). I'm trying to add a product, then add records for the management team:

        -- product table
        productName = 'foo'
        productDescription = 'bar'
        ...
        ... etc

    The managementteam table just has the id of the product and the id of each user selected from a drop-down. The user IDs are passed in via a list like "1,3,6,20". How should I go about adding the records to the managementteam table?
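
    For what the question ultimately needs - one managementteam row per id in a string like "1,3,6,20" - a sketch of the split-and-insert step is below, using Python's sqlite3 in place of ColdFusion and Oracle. The table layout is inferred from the question; everything else is an assumption.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE managementteam (product_id INT, user_id INT)")

        product_id = 7         # invented id of the newly added product
        user_ids = "1,3,6,20"  # the list exactly as posted by the form
        conn.executemany(
            "INSERT INTO managementteam VALUES (?, ?)",
            [(product_id, int(u)) for u in user_ids.split(",")],
        )
        print(conn.execute("SELECT * FROM managementteam").fetchall())
        # [(7, 1), (7, 3), (7, 6), (7, 20)]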

  • How do I change Windows Service environment path

    - by rs
    I need to change the environment variable returned by Environment.GetEnvironmentVariable("TMP") for a .NET 2.0 Windows service that runs under its own user account. The server is Windows Server 2003 SP2. Can anybody tell me how to change the Windows environment variable for that user?

  • What is the best database structure for this scenario?

    - by Ricketts
    I have a database that holds real estate MLS (Multiple Listing Service) data. Currently, I have a single table that holds all the listing attributes (price, address, sqft, etc.). There are several different property types (residential, commercial, rental, income, land, etc.); each property type shares a majority of the attributes, but a few are unique to that property type.

    My question: the shared attributes number in excess of 250 fields, which seems like too many to have in a single table. My thought is I could break them out into an EAV (Entity-Attribute-Value) format, but I've read many bad things about that, and it would make running queries a real pain, as any of the 250 fields could be searched on. If I were to go that route, I'd literally have to pull all the data out of the EAV table grouped by listing id, merge it on the application side, then run my query against the in-memory object collection. This does not seem very efficient either.

    I am looking for ideas or recommendations on which way to proceed. Perhaps the 250+ field table is the only way to go. Just as a note, I'm using SQL Server 2012 and .NET 4.5 with Entity Framework 5 and C#, and data is passed to an ASP.NET web application via a WCF service. Thanks in advance.

  • Active User Tracking, PHP Sessions

    - by Nik
    Alright, I'm trying to work on a function for active-user counting in an AJAX-based application. I want to express the query below in correct syntax, but I'm not sure how to write it. The desired query, in pseudo-SQL, is:

        SELECT count(*) FROM active
        WHERE timestamp > time() - 1800
          AND nick = (a string that doesn't contain [AFK])

    Now, I understand that time() - 1800 can be assigned to a variable, but how are "timestamp > variable" and "nick doesn't contain a string" written in SQL?
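
    A sketch of the query the question is reaching for: the cutoff time is computed once and bound as a parameter, and the "[AFK]" exclusion becomes a NOT LIKE. Shown with Python's sqlite3 on invented rows; in MySQL/PHP the placeholder style differs, but the WHERE clause is the same.

        import sqlite3, time

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE active (timestamp INTEGER, nick TEXT)")
        now = int(time.time())
        conn.execute("INSERT INTO active VALUES (?, ?)", (now, "alice"))
        conn.execute("INSERT INTO active VALUES (?, ?)", (now, "[AFK]bob"))

        cutoff = now - 1800  # the time() - 1800 from the question
        count, = conn.execute(
            "SELECT COUNT(*) FROM active WHERE timestamp > ? AND nick NOT LIKE ?",
            (cutoff, "%[AFK]%"),
        ).fetchone()
        print(count)  # 1 - [AFK]bob is excluded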

  • Is NSManagedObjectContext autosaved or am I looking at NSFetchedResultsController's cache?

    - by Andreas
    I'm developing an iPhone app where I use an NSFetchedResultsController in the main table view controller. I create it like this in viewDidLoad of the main table view controller:

        NSSortDescriptor *sortDescriptorDate = [[NSSortDescriptor alloc] initWithKey:@"date" ascending:YES];
        NSSortDescriptor *sortDescriptorTime = [[NSSortDescriptor alloc] initWithKey:@"start" ascending:YES];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptorDate, sortDescriptorTime, nil];
        [fetchRequest setSortDescriptors:sortDescriptors];
        [sortDescriptorDate release];
        [sortDescriptorTime release];
        [sortDescriptors release];

        controller = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                         managedObjectContext:context
                                                           sectionNameKeyPath:@"date"
                                                                    cacheName:nil];
        [fetchRequest release];

        NSError *error;
        BOOL success = [controller performFetch:&error];

    Then, in a subsequent view, I create a new object on the context:

        TestObject *testObject = [NSEntityDescription insertNewObjectForEntityForName:@"TestObject"
                                                               inManagedObjectContext:context];

    TestObject has several related objects which I create in the same way and add to the testObject using the provided add...Objects methods. Then, if, before saving the context, I press cancel and go back to the main table view, nothing is shown, as expected. However, if I restart the app, the object I created on the context shows up in the main table view. How come? At first I thought the NSFetchedResultsController was reading from its cache, but as you can see I set the cache name to nil just to test. Also, [context hasChanges] returns true after I restart. What am I missing here?

  • Android ExpandableListView From Database

    - by SterAllures
    I want to create a two-level ExpandableListView from a local database. At the group level I want to show the names from the Category table:

        category_id | name
        ------------------
        1           | NameOfCategory

    At the child level I want to show the names from the List table:

        list_id | name     | foreign_category_id
        ----------------------------------------
        1       | Listname | 1

    I've got a method in my DatabaseDAO to get all the values from the table:

        public List<Category> getAllCategories() {
            database = dbHelper.getReadableDatabase();
            List<Category> categories = new ArrayList<Category>();
            Cursor cursor = database.query(SQLiteHelper.TABLE_CATEGORY, null, null, null, null, null, null);
            cursor.moveToFirst();
            while (!cursor.isAfterLast()) {
                Category category = cursorToCategory(cursor);
                categories.add(category);
                cursor.moveToNext();
            }
            cursor.close();
            return categories;
        }

    Adding the names to the group level is easy; I do it the following way:

        ArrayList<String> groups;
        for (Category c : databaseDao.getAllCategories()) {
            groups.add(c.getName());
        }

    Now I want to add the children to the ExpandableListView in the following array: ArrayList<ArrayList<ArrayList<String>>> children;. How do I get the children under the correct group name? I think it has something to do with groups.indexOf(), but in the List table I only have a foreign category_id, not the actual name.
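
    A sketch of the grouping step: map each category_id to its group index first, then drop every List row into the child bucket at that index. Python stands in for the Java here, and the toy rows stand in for the two cursors; only the column layout follows the question.

        categories = [(1, "NameOfCategory"), (2, "OtherCategory")]  # (category_id, name)
        list_rows = [(1, "Listname", 1),                            # (list_id, name, fk)
                     (2, "Second", 2),
                     (3, "Third", 1)]

        groups = [name for _, name in categories]
        index_of = {cid: i for i, (cid, _) in enumerate(categories)}

        children = [[] for _ in groups]  # one child bucket per group
        for _, name, fk in list_rows:
            children[index_of[fk]].append(name)

        print(groups)    # ['NameOfCategory', 'OtherCategory']
        print(children)  # [['Listname', 'Third'], ['Second']]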

  • Database nesting layout confusion

    - by arzon
    I'm no expert in databases and a beginner in Rails, so here goes something which kinda confuses me... Assume I have three classes as a sample (note that no effort has been made to address any possible Rails reserved-words issue in the sample):

        class File < ActiveRecord::Base
          has_many :records, :dependent => :destroy
          accepts_nested_attributes_for :records, :allow_destroy => true
        end

        class Record < ActiveRecord::Base
          belongs_to :file
          has_many :users, :dependent => :destroy
          accepts_nested_attributes_for :users, :allow_destroy => true
        end

        class User < ActiveRecord::Base
          belongs_to :record
        end

    Upon entering records, the database contents will appear as shown below. My issue is that if there are a lot of Files for the same Record, there will be duplicate record names. This will also be true if there are multiple Records for the same user in the Users table. I was wondering if there is a better way than this, so that one or more Files point to a single Record entry and one or more Records point to a single User. By the way, the File names are unique.

    Files table:

        id  name
        1   name1
        2   name2
        3   name3
        4   name4

    Records table:

        id  file_id  record_name  record_type
        1   1        ForDaisy1    ...
        2   2        ForDonald1   ...
        3   3        ForDonald2   ...
        4   4        ForDaisy1    ...

    Users table:

        id  record_id  username
        1   1          Daisy
        2   2          Donald
        3   3          Donald
        4   4          Daisy

    Is there any way to optimize the database to prevent duplication of entries, or is this really the correct and proper behavior? I spread the database out into different tables to be able to easily add new columns in the future.

  • Need Help with Consolidating RoR Google Map Results

    - by Kevin
    I have a project that returns geocoded results within 20 miles of the user. I want these results grouped on the map by zip code, then, within the info window, to show the individual results. The code posted below works, but for some reason it only displays the 1.png icon rather than looking at the results and using the correct .png icon associated with the number. When I look at the info windows, each displays the correct png path, like "/images/2.png" or "/images/5.png", but the actual image is always 1.

        @ziptickets = Ticket.find(:all,
          :origin => coords,
          :select => 'DISTINCT zip, lat, lng',
          :within => @user.distance_to_travel,
          :conditions => "status_id = 1")

        for t in @ziptickets
          zips = Ticket.find(:all, :conditions => ["zip = ?", t.zip])
          currentzip = t.zip.to_s
          tixinzip = zips.size.to_s
          imagelocation = "/images/" + tixinzip + ".png"
          shadowlocation = "/images/" + tixinzip + "s.png"
          @map.icon_global_init(GIcon.new(:image => imagelocation,
                                          :shadow => shadowlocation,
                                          :shadow_size => GSize.new(60, 40),
                                          :icon_anchor => GPoint.new(20, 20),
                                          :info_window_anchor => GPoint.new(9, 2)), "test")
          newicon = Variable.new("test")
          new_marker = GMarker.new([t.lat, t.lng],
                                   :icon => newicon,
                                   :title => imagelocation,
                                   :info_window => currentzip)
          @map.overlay_init(new_marker)
        end

    I tried changing the last part of the icon setup from:

        :info_window_anchor => GPoint.new(9, 2)), "test")
        newicon = Variable.new("test")

    to:

        :info_window_anchor => GPoint.new(9, 2)), currentzip)
        newicon = Variable.new(currentzip)

    but the strangest thing is that any string that has numbers in it causes the map to fail to render in the view and just show a blank screen... same if I replace it with:

        :info_window_anchor => GPoint.new(9, 2)), "123")
        newicon = Variable.new("123")

    Any advice would be helpful. It also runs a bit slower than my previous code, which just set up 4 standard icons and used them outside of the loop, so any hints to speed up execution would be appreciated greatly. Thanks!

  • Mysql Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The size of the database is currently 5 GB, and most of that goes to one particular column, which is a text column holding the contents of PDF files. We expect to have a 20-30 GB, or maybe 50 GB, database after a couple of months, and this system will be used frequently. I have a couple of questions regarding this setup:

    1) We are using InnoDB on every table, including the users table. Is it better to use MyISAM on this table, where we store the text version of the PDF files (from a memory-usage/performance perspective)?

    2) We use Sphinx for searching; however, the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may allocate 50 MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so the 100,000 rows will become around 3-4 million rows; a couple of months later, instead of having 300,000-350,000 rows, we'll have 10 million rows storing the text versions of these PDF files. However, we will retrieve fewer pages: instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, and that will have a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; if we retrieve PDF files that have fewer than 5 pages, the execution time drops to 0.06 seconds, and it also uses less memory.

    Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k, but it will save memory and improve performance. Is this a good approach to the problem, and do you have any ideas on how else to overcome it? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks!
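
    A sketch of the 5-page chunking proposed in point 2: instead of one huge text row per PDF, each document becomes several rows of at most PAGES_PER_CHUNK pages, so highlighting only fetches the chunks that matched. The (doc_id, chunk_no, text) row layout is an invented illustration, not the poster's schema:

        PAGES_PER_CHUNK = 5

        def chunk_pages(doc_id, pages):
            # pages: list of page texts extracted from one PDF
            for i in range(0, len(pages), PAGES_PER_CHUNK):
                chunk_no = i // PAGES_PER_CHUNK
                yield (doc_id, chunk_no, "\n".join(pages[i:i + PAGES_PER_CHUNK]))

        pages = ["page %d text" % n for n in range(1, 13)]  # a 12-page document
        for row in chunk_pages(42, pages):
            print(row[0], row[1], len(row[2]), "chars")     # 3 chunks: 5 + 5 + 2 pages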

  • Updating a local sqlite db that is used for local metadata & caching from a service?

    - by Pharaun
    I've searched through the site and haven't found a question/answer that quite answers my question; the closest one I found was "Syncing objects between two disparate systems, best approach". Anyway, to begin: because there are no RSS feeds available, I'm screen-scraping a webpage. The app does a fetch, goes through the page to scrape out all of the information I'm interested in, and dumps that information into a SQLite database so that I can query the information at my leisure without repeatedly fetching from the website.

    However, I'm also storing various metadata on the data itself in the SQLite db, such as: whether I have looked at the data, whether the data is new or old, and a bookmark into a chunk of data (think of it as a collection of unrelated data, where the bookmark is just a pointer to where I am in processing/reading the said data).

    So my current problem is figuring out how to update the local SQLite database with new and/or changed data from the website in a manner that is effective and straightforward. Here's my current idea:

    1. Download the page itself
    2. Create a temporary table for the parsed data to go into
    3. Do a comparison between the official and the temporary table, and copy updates and/or new information to the official table

    This process seems kind of complicated, because I would have to figure out how to determine whether the data in the temporary table is new, updated, or unchanged. So I am wondering whether there is a better approach, or whether anyone has any suggestions on how to architect/structure such a system?
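
    A sketch of step 3, the part that seems complicated: a single LEFT JOIN from the staging table to the official table classifies every scraped row as new, changed, or unchanged before merging. Shown with Python's sqlite3; the key and column names are invented:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE items (key TEXT PRIMARY KEY, body TEXT, seen INTEGER);
        CREATE TEMP TABLE staging (key TEXT PRIMARY KEY, body TEXT);
        INSERT INTO items   VALUES ('a', 'old text', 1), ('b', 'same', 1);
        INSERT INTO staging VALUES ('a', 'new text'), ('b', 'same'), ('c', 'brand new');
        """)
        rows = conn.execute("""
        SELECT s.key,
               CASE WHEN i.key IS NULL    THEN 'new'
                    WHEN i.body <> s.body THEN 'changed'
                    ELSE 'unchanged' END AS status
        FROM staging s LEFT JOIN items i ON i.key = s.key
        """).fetchall()
        print(rows)  # [('a', 'changed'), ('b', 'unchanged'), ('c', 'new')]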

  • Loading a page into memory in Rails

    - by titaniumdecoy
    My Rails app produces XML when I load /reports/generate_report.xml. On a separate page, I want to read this XML into a variable and save it to the database. How can I do this? Can I somehow stream the response from the /reports/generate_report.xml URI into a variable? Or is there a better way, given that the XML is produced by the same web app?

  • MVC: Checkboxes generated using JavaScript not appearing in FormCollection on postback

    - by Andy Evans
    I took over another project (written by one contractor, modified by another, and now it's not working) using MVC/C#, where a view that has a table (see below) is dynamically populated using JSON/JavaScript - the first column of which is a checkbox.

    View (Spark view engine):

        <table id='component_list' name='component_list' cellpadding='0' border='0' cellspacing='0'>
          <thead>
            <tr>
              <th>&nbsp;</th>
              <th>Component</th>
              <th>Component Type</th>
              <th>Evenflo Part #</th>
              <th>Supplier Part #</th>
              <th>Supplier</th>
              <th>Requirement</th>
              <th>Location</th>
              <th>Region</th>
            </tr>
          </thead>
          <tbody>
          </tbody>
        </table>

    When the page is rendered and I look at the source, I do not see the table data (I wouldn't expect to, since the rows are added client-side). However, when the form is posted back, the FormCollection in the controller is empty. Supposedly this had been working before the last contractor got their hands on it - which is another post altogether. My goal right now is getting the checkboxes into the FormCollection. Any suggestions would be greatly appreciated. Thanks.

  • Pass variables to current tab via chrome extension

    - by iamthejeff
    I am writing my first Chrome extension, and I want to pass a variable to the currently opened tab and manipulate the DOM with it. My extension has a button which, when clicked, executes this code:

        chrome.tabs.getSelected(null, function(tab) {
            chrome.tabs.executeScript(tab.id, { file: 'tabscript.js' });
        });

    This works fine, but I see no way to pass a variable to tabscript.js so it can be used in the opened tab.

  • mysql and trigger usage question

    - by dhruvbird
    I have a situation in which I don't want inserts to take place (the transaction should roll back) if a certain condition is met. I could write this logic in the application code, but say that, for some reason, it has to be written in MySQL itself (say clients written in different languages will be inserting into this MySQL InnoDB table) [that's a separate discussion].

    Table definition:

        CREATE TABLE table1(x int NOT NULL);

    The trigger looks something like this:

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
        BEGIN
            IF (condition) THEN
                SET NEW.x = NULL;
            END IF;
        END;

    I am guessing it could also be written as (untested):

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
        BEGIN
            IF (condition) THEN
                ROLLBACK;
            END IF;
        END;

    But this doesn't work:

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        ROLLBACK;

    You are guaranteed that:

    1. Your DB will always be MySQL
    2. The table type will always be InnoDB
    3. That NOT NULL column will always stay the way it is

    Question: Do you see anything objectionable in the first method?
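
    As a hedged aside on the second variant: MySQL does not allow ROLLBACK inside a trigger, and from MySQL 5.5 the idiomatic way to abort the insert is SIGNAL SQLSTATE '45000'. The sketch below shows the same "fail from inside the trigger" idea with Python's sqlite3, whose RAISE(ABORT, ...) is the closest analogue; the condition (x < 0) is invented:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE table1 (x INT NOT NULL);
        CREATE TRIGGER t1 BEFORE INSERT ON table1
        WHEN NEW.x < 0
        BEGIN
            SELECT RAISE(ABORT, 'condition met, insert rejected');
        END;
        """)
        conn.execute("INSERT INTO table1 VALUES (1)")       # succeeds
        try:
            conn.execute("INSERT INTO table1 VALUES (-1)")  # trigger aborts it
        except sqlite3.IntegrityError as e:
            print(e)  # condition met, insert rejected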
