Search Results

Search found 2798 results on 112 pages for 'pull'.

Page 94/112 | < Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >

  • PHP Pass Dynamic Array name to function

    - by Brad
    How do I pass an array key to a function to pull up the right key's data?

        // The array
        <?php
        $var['TEST1'] = array(
            'Description' => 'This is a Description',
            'Version'     => '1.11',
            'fields'      => array(
                'ID'   => array('type' => 'int', 'length' => '11', 'misc' => 'auto_increment'),
                'DATA' => array('type' => 'varchar', 'length' => '255')
            )
        );
        $var['TEST2'] = array(
            'Description' => 'This is the 2nd Description',
            'Version'     => '2.1',
            'fields'      => array(
                'ID'   => array('type' => 'int', 'length' => '11', 'misc' => 'auto_increment'),
                'DATA' => array('type' => 'varchar', 'length' => '255')
            )
        );

        // The function
        <?php
        $obj = 'TEST1';
        print_r($schema[$obj]); // <-- Gives me output. But calling the function doesn't.
        echo buildStructure($obj);

        /**
         * @TODO add auto_inc support
         */
        function buildStructure($obj)
        {
            $output = '';
            $primaryKey = $schema["{$obj}"]['primary key'];
            foreach ($schema["{$obj}"]['fields'] as $name => $tag) // #### ERROR #### Invalid argument supplied for foreach()
            {
                $type        = $tag['type'];
                $length      = $tag['length'];
                $default     = $tag['default'];
                $description = $tag['description'];
                $length  = (isset($length)) ? "({$length})" : '';
                $default = ($default == NULL) ? "NULL" : $default;
                $output .= "`{$name}` {$type}{$length} DEFAULT {$default} COMMENT `{$description}`, ";
            }
            return $output;
        }
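
    For what it's worth, the foreach() error above is consistent with $schema not being visible inside the function (PHP functions don't see variables from the calling scope). A minimal sketch of one way around that, passing the whole array in alongside the key; names are taken from the question, and the 'primary key' / COMMENT handling is left out:

        <?php
        // Sketch: hand the schema array to the function instead of relying on outer scope.
        function buildStructure(array $schema, $obj)
        {
            $output = '';
            foreach ($schema[$obj]['fields'] as $name => $tag) {
                $type    = $tag['type'];
                $length  = isset($tag['length'])  ? "({$tag['length']})" : '';
                $default = isset($tag['default']) ? $tag['default']      : 'NULL';
                $output .= "`{$name}` {$type}{$length} DEFAULT {$default}, ";
            }
            return $output;
        }

        echo buildStructure($var, 'TEST1'); // $var is the array defined above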

    Read the article

  • Missing tables on Scaffold Dynamic Data page

    - by Ben Amada
    I created a Linq-to-SQL DBML for the first time. I dragged and dropped all my tables over to the designer. The tables all appear in the designer.cs file. In Global.asax, I also have model.RegisterContext() with the ScaffoldAllTables = true option. The routes are also set up. I can pull up the scaffolding page, but there's at least one table that is missing that I'm trying to get to show up. This missing table has a relationship with a child table that references it. The child table appears. When viewing data for the child table, the column that references the missing/parent table shows the numeric PK int value, rather than showing the "name". So instead of showing "Cars" it shows 1, and instead of showing "Planes" it shows 2, etc. There's another table in the DB that has the same type of structure as the missing table, and it is correctly appearing in the scaffolded tables. For this missing table, I've tried explicitly adding the ScaffoldTable attribute, to no avail. Does anyone know what would cause a table like this to not appear in the list of scaffolded tables? Thanks very much.

    Read the article

  • What are the Limitations for Connecting to an Access Query in Excel

    - by thornomad
    I have an Access 2007 database that has a number of tables, some fairly large (100,000+ records). I have created a union query to pull some of the same types of data from multiple tables into one large query for pivot table manipulation and reporting. For example:

        SELECT Language FROM Table1
        UNION ALL
        SELECT Language FROM Table2
        UNION ALL
        SELECT Language FROM Table3;

    This works. I found, quickly, however, that a union query will not show up when connecting to the data source from Excel 2007. So, I created a second query to reference the union query. Like so:

        SELECT * FROM [The Above Union Query];

    This query works and it, initially, was accessible from Excel. Time passed, I've added more data. Suddenly, when I connect to my Access database from Excel, my query referencing the union has disappeared. MS Access shows no signs of an issue (data displays in Access) and my other non-union queries are showing up in Excel 2007 ... but not the one that references the union. What could be going on? Why did it disappear? I noticed that if I switch some of the referenced tables in the union query to a smaller table (with fewer rows), all of a sudden the query appears in Excel again. At least, I think that's what the difference is. I really can't put my finger on why some of the union queries won't show up and some will. I'm stumped and need some guidance. Thanks.

    Read the article

  • Git repositories on shared hosting with ssh access - multiple users / one ssh account

    - by acp
    I'm part of a small team trying to start coding on a project. I've decided it's time to give git a chance (no more svn) and was trying to see if we could use our shared web hosting to deploy a "public" repository there so that we can easily push/pull to/from it and keep up to date with each other's changes. The problem I'm having now is that we only have a single ssh account for that hosting. Having used svn in the past, I could enforce a svn username on a given pair of ssh keys; however, I don't seem to be able to do something similar with git (in other words, tie the ssh keypair to a specific dev). I don't mind everybody having read/write permissions everywhere, since anything that is private should stay on each developer's own machine. Finally, solutions such as gitosis can not be used. I guess my question to you is: how is accountability for git pushes established? Is it tied to the ssh account being used, or the email address given in git config? Can I create different ssh keys for every developer (for the same ssh account though), and just send them to the devs?

    Read the article

  • How to check a bool setting in my iphone app

    - by dusk
    I have a setting in Root.plist with Key = 'latestNews' of type PSToggleSwitchSpecifier and DefaultValue as a boolean that is checked. If I understand that correctly, it should = YES when I pull it in to my code. I'm trying to check that value and set an int var to pass it to my php script. What is happening is that my boolean is either nil or NO and then my int var = 0. What am I doing wrong?

        int latestFlag;
        NSUserDefaults *prefs = [NSUserDefaults standardUserDefaults];
        BOOL latestNews = [prefs boolForKey:@"latestNews"];
        if (latestNews)
            latestFlag = 1;
        else
            latestFlag = 0;

        NSString *urlstr = [[NSString alloc] initWithFormat:@"http://www.mysite.com/folder/iphone-test.php?latest=%d", latestFlag];
        NSURL *url = [[NSURL alloc] initWithString:urlstr];

        // these are auto-released
        NSString *ans = [NSString stringWithContentsOfURL:url];
        NSArray *listItems = [ans componentsSeparatedByString:@","];
        self.listData = listItems;

        [urlstr release];
        [url release];
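
    A likely relevant detail, with a minimal sketch assuming the same 'latestNews' key: the DefaultValue entries in a Settings.bundle are only read by the Settings app, so boolForKey: returns NO until a value has actually been written or registered. One common pattern is to register a fallback at launch:

        // Sketch: register the fallback default once at startup, e.g. in
        // applicationDidFinishLaunching:, so boolForKey: has something to read
        // before the user ever opens the Settings app.
        NSDictionary *appDefaults = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                                 forKey:@"latestNews"];
        [[NSUserDefaults standardUserDefaults] registerDefaults:appDefaults];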

    Read the article

  • Inefficient 'ANY' LINQ clause

    - by Focus
    I have a query that pulls back a user's "feed", which is essentially all of their activity. If the user is logged in, the query will be filtered so that the feed not only includes all of the specified user's data, but also any of their friends'. The database structure includes an Actions table that holds the user that created the action, and a UserFriends table which holds any pairing of friends using a FrienderId and FriendeeId column which map to UserIds. I have set up my LINQ query and it works fine to pull back the data I want; however, I noticed that the query gets turned into X number of CASE clauses in profiler, where X is the number of total Actions in the database. This will obviously be horrible when the database has a user base larger than just me and 3 test users. Here's the SQL query I'm trying to achieve:

        select *
        from [Action] a
        where a.UserId = 'GUID'
           OR a.UserId in (SELECT FriendeeId from UserFriends uf where uf.FrienderId = 'GUID')
           OR a.UserId in (SELECT FrienderId from UserFriends uf where uf.FriendeeId = 'GUID')

    This is what I currently have as my LINQ query:

        feed = feed.Where(o => o.User.UserKey == user.UserKey
            || db.Users.Any(u => u.UserFriends.Any(ufr => ufr.Friender.UserKey == user.UserKey && ufr.isApproved)
            || db.Users.Any(u2 => u2.UserFriends.Any(ufr => ufr.Friendee.UserKey == user.UserKey && ufr.isApproved))));

    This query creates this: http://pastebin.com/UQhT90wh That shows up X times in the profile trace, once for each Action in the table. What am I doing wrong? Is there any way to clean this up?
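
    A sketch of one way the SQL above could be expressed more directly in LINQ, pre-computing the friend keys and letting Contains translate to an IN-style subquery rather than nested Any calls; the property names (Friender, Friendee, isApproved, UserKey) are taken from the question and may not match the real model exactly:

        // Sketch: friend keys as a deferred query, then a single IN-style filter.
        var friendKeys = db.UserFriends
            .Where(uf => uf.isApproved
                && (uf.Friender.UserKey == user.UserKey || uf.Friendee.UserKey == user.UserKey))
            .Select(uf => uf.Friender.UserKey == user.UserKey
                ? uf.Friendee.UserKey
                : uf.Friender.UserKey);

        feed = feed.Where(o => o.User.UserKey == user.UserKey
            || friendKeys.Contains(o.User.UserKey));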

    Read the article

  • append text to lines in a CSV file

    - by MichaelMcCabe
    This question seems to have been asked a million times around the web, but I cannot find an answer which will work for me. Basically, I have a CSV file which has a number of columns (say two). The program goes through each row in the CSV file, taking the first column value, then asks the user for the value to be placed in the second column. This is done on a handheld running Windows 6. I am developing using C#. It seems a simple thing to do, but I can't seem to add text to a line. I can't use OleDb, as System.Data.OleDb isn't in the .NET version I am using. I could use another CSV file, and when they complete each line, it writes it to another CSV file. But the problems with that are: the file that's produced at the end needs to contain EVERY line (so what if they pull the batteries out half way through?), and what if they come back to continue doing this another time - how will the program know where to start back from?

    Read the article

  • Mercurial Subrepos, how to control which changeset I want to use for a subrepo?

    - by Lasse V. Karlsen
    I am reading up on subrepos, and have been running some tests locally. It seems to work OK so far, but I have one question: how do I specify/control which changeset I want to use for a particular subrepo? For instance, let's say I have the following two projects:

        class library            application

        o fourth commit          o second commit, added a feature
        |                        |
        o third commit           o initial commit
        |
        | o second commit
        |/
        o initial commit

    Now, I want the class library as a subrepo of my application, but due to the immaturity of the longest branch (the one ending up as fourth commit), I want to temporarily use the "second commit" tip. How do I go about configuring that, assuming it is even possible? Here's a batch file that sets up the above two repos + adds the library as a subrepo. If you run the batch file, it will output:

        [C:\Temp] :test
        ...
        v4

    As you can see from that last line there, it verifies the contents of the file in the class library, which is "v4" from the fourth commit. I'd like it to be "v2", and persist as "v2" until I'm ready to pull down a newer version from the class library repository. Can anyone tell me if it is possible to do what I want, and if so, what I need to do in order to lock my subrepo to the right changeset? Batch file:

        @echo off
        if exist app rd /s /q app
        if exist lib rd /s /q lib
        if exist app-clone rd /s /q app-clone

        rem == app ==
        hg init app
        cd app
        echo program>main.txt
        hg add main.txt
        hg commit -m "initial commit"
        echo program+feature1>main.txt
        hg commit -m "second commit, added a feature"
        cd ..

        rem == lib ==
        hg init lib
        cd lib
        echo v1>lib.txt
        hg add lib.txt
        hg commit -m "initial commit"
        echo v2>lib.txt
        hg commit -m "second commit"
        hg update 0
        echo v3>lib.txt
        hg commit -m "third commit"
        echo v4>lib.txt
        hg commit -m "fourth commit"
        cd ..

        rem == subrepos ==
        cd app
        hg clone ..\lib lib
        echo lib = ..\lib >.hgsub
        hg add .hgsub
        hg commit -m "added subrepo"
        cd ..

        rem == clone ==
        hg clone app app-clone
        type app-clone\lib\lib.txt
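
    For reference, a small sketch of the usual pinning mechanism, using the same paths as the batch file above: the parent repo records the subrepo's changeset in .hgsubstate whenever the parent is committed, so updating the subrepo working copy to the wanted revision and committing the parent pins it there.

        rem Sketch: pin app's lib subrepo at "second commit" (rev 1 in lib)
        cd app\lib
        hg update 1
        cd ..
        hg commit -m "pin lib subrepo at second commit"
        rem the chosen revision is now recorded in .hgsubstate;
        rem later, updating lib to a newer rev and committing app again moves the pin forward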

    Read the article

  • Looking for MSSQL Table Design Sanity Check for Profile Tables with Dynamic Columns.

    - by Code Sherpa
    I just want a general sanity check regarding database design. We are building a web system that has both Teachers and Students. Both have accounts in the system. Both have profiles in the system. My question is about the table design of those Profile tables. The Teacher profile is pretty static regarding the metadata associated with it. Each teacher has a set number of fields that exposes information about that individual (schools, degrees, etc). The students, however, are a different case. We are using a windows service to pull varying data about the students from an endless stream of excel spreadsheets. The data gets moved into our database and then the fields appear in association with the student's profile. Accordingly, each and every student may have very different fields in their profile. I originally started with the concept of three tables:

        Accounts
        ----------
        AccountID

        TeacherProfiles
        ----------
        TeacherProfileID
        AccountID
        SecondarySchool
        University
        YearsTeaching
        Etc...

        StudentProfiles
        ----------
        StudentProfileID
        AccountID
        Header
        Value

    The StudentProfiles table would hold the name of the column headers from the excel spreadsheets and the associated values. I have since evolved the design a little to treat Profiles more generically per the attached ERD image. The Teacher and Student "Headers" are stored in a table called "ProfileAttributeTypes" and responses (either from the excel document or via input fields on the web form) are put in a ProfileAttributes table. This way both Student and Teacher profiles can be associated with a dynamic flow of profile fields. The "Permissions" table tells us whether we are dealing with a Student or a Teacher. Since this system is likely to grow quickly, I want to make sure the foundation is solid. Can you please provide feedback about this design and let me know if it seems sound or if you could see problems it might create and, if so, what might be a better approach? Thanks in advance.

    Read the article

  • Where are possible locations of queueing/buffering delays in Linux multicast?

    - by Matt
    We make heavy use of multicast messaging across many Linux servers on a LAN. We are seeing a lot of delays. We basically send an enormous number of small packets. We are more concerned with latency than throughput. The machines are all modern, multi-core (at least four, generally eight, 16 if you count hyperthreading) machines, always with a load of 2.0 or less, usually with a load less than 1.0. The networking hardware is also under 50% capacity. The delays we see look like queueing delays: the packets will quickly start increasing in latency, until it looks like they jam up, then return to normal. The messaging structure is basically this: in the "sending thread", pull messages from a queue, add a timestamp (using gettimeofday()), then call send(). The receiving program receives the message, timestamps the receive time, and pushes it into a queue. In a separate thread, the queue is processed, analyzing the difference between sending and receiving timestamps. (Note that our internal queues are not part of the problem, since the timestamps are added outside of our internal queuing.) We don't really know where to start looking for an answer to this problem. We're not familiar with Linux internals. Our suspicion is that the kernel is queuing or buffering the packets, either on the send side or the receive side (or both). But we don't know how to track this down and trace it. For what it's worth, we're using CentOS 4.x (RHEL kernel 2.6.9).

    Read the article

  • Recursive code Sorting in VB

    - by Peter
    Ages old question: You have 2 hypothetical eggs, and a 100 story building to drop them from. The goal is to have the least number of guaranteed drops that will ensure you can find what floor the eggs break from the fall. You can only break 2 eggs. Using a 14-drop minimum method, I need help writing code that will allow me to calculate the following: Start with the first drop attempt on the 14th floor. If the egg breaks, then drop floors 1-13 to find the floor that causes the break. Else, if the egg does not break, move up 13 floors to floor number 27 and drop again. If the egg breaks, then drop floors 15-26, starting on 15 and working up, to find the floor the egg breaks on. Else, if the egg does not break, move up 12 floors to floor number 39 and drop again. Etc., etc. The way this increases is as follows: 14+13+12+11+10+9+8+7+6+5+4+3+2+1, always adding to the previous value, by one less each time. I have never written a sorting algorithm before, and was curious how I might go about setting this up in a much more efficient way than a mile-long chain of if-then statements. My original idea was to store values for the floors in an array, and pull from that, using the index to move up or down and subtract or add to the variables. The most elegant solution would be a recursive function that handled this for any selected floor, 1-100, and ran the math, with an output that shows how many drops were needed in order to find that floor. The maximum is always 14, but some can be done in less.
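
    A rough, untested sketch of the counting logic described above, written iteratively rather than recursively in VB.NET; CountDrops and the 100-floor cap are illustrative assumptions, not part of the original question:

        ' Sketch: given the floor the egg first breaks on (1-100), count how many
        ' drops the 14/13/12/... scheme needs to identify it.
        Function CountDrops(breakFloor As Integer) As Integer
            Dim drops As Integer = 0
            Dim previous As Integer = 0
            Dim stepSize As Integer = 14
            Dim checkpoint As Integer = stepSize

            ' First egg: drop at 14, 27, 39, ... until it breaks.
            Do While checkpoint < breakFloor
                drops += 1                      ' egg survived this drop
                previous = checkpoint
                stepSize -= 1
                checkpoint += stepSize
                If checkpoint > 100 Then checkpoint = 100
            Loop
            drops += 1                          ' the drop on which the first egg breaks

            ' Second egg: walk up one floor at a time from previous + 1.
            If breakFloor = checkpoint Then
                drops += checkpoint - previous - 1   ' every lower floor survives
            Else
                drops += breakFloor - previous       ' it breaks on breakFloor itself
            End If
            Return drops
        End Function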

    Read the article

  • AngularJS not validating email field in form

    - by idipous
    I have the HTML below, where I have a form that I want to submit to the AngularJS controller.

        <div class="newsletter color-1" id="subscribe" data-ng-controller="RegisterController">
            <form name="registerForm">
                <div class="col-md-6">
                    <input type="email" placeholder="[email protected]" data-ng-model="userEmail" required class="subscribe">
                </div>
                <div class="col-md-2">
                    <button data-ng-click="register()" class="btn btn-primary pull-right btn-block">Subscribe</button>
                </div>
            </form>
        </div>

    And the controller is below:

        app.controller('RegisterController', function ($scope, dataFactory) {
            $scope.users = dataFactory.getUsers();
            $scope.register = function () {
                var userEmail = $scope.userEmail;
                dataFactory.insertUser(userEmail);
                $scope.userEmail = null;
                $scope.ThankYou = "Thank You!";
            }
        });

    The problem is that no validation is taking place when I click the button. It is always routed to the controller even though I do not supply a correct email. So every time I click the button I get the {{ThankYou}} variable displayed. Maybe I do not understand something.
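
    A sketch of one common pattern that fits what's described above, reusing the same form and controller names: ng-click fires regardless of the form's validity, so the validity has to be checked explicitly (and novalidate keeps the browser's own validation out of the way).

        <!-- Sketch: gate the click on the form's validity and let Angular own validation. -->
        <form name="registerForm" novalidate>
            <input type="email" name="userEmail" data-ng-model="userEmail" required>
            <button data-ng-disabled="registerForm.$invalid"
                    data-ng-click="registerForm.$valid && register()">Subscribe</button>
        </form>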

    Read the article

  • Evenly distribute data into columns with JavaScript

    - by marius.cdm
    I'm looking for a way to evenly distribute my JSON data into HTML columns, using javascript to pull the data:

        $.ajax({
            url: "url",
            dataType: 'json',
            data: "e=" + escape(divID),
            cache: true,
            success: function(data) {
                var items = data;
                // ???
                $('.result').html(list);
            }
        });

    Input data:

        ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K"]

    Expected result:

        <ul>
            <li>A</li>
            <li>B</li>
            <li>C</li>
            <li>D</li>
        </ul>
        <ul>
            <li>E</li>
            <li>F</li>
            <li>G</li>
            <li>H</li>
        </ul>
        <ul>
            <li>I</li>
            <li>J</li>
            <li>K</li>
        </ul>

    I found a partial result here, but the output data is in console. Any help would be appreciated.
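
    A small sketch of one way to build that markup, chunking the parsed array four entries per list as in the expected result above; buildColumns is just an illustrative name:

        // Sketch: turn ["A", "B", ...] into consecutive <ul> blocks of `perColumn` items.
        function buildColumns(items, perColumn) {
            var html = '';
            for (var i = 0; i < items.length; i += perColumn) {
                html += '<ul>';
                for (var j = i; j < i + perColumn && j < items.length; j++) {
                    html += '<li>' + items[j] + '</li>';
                }
                html += '</ul>';
            }
            return html;
        }

        // e.g. inside the success callback above:
        // $('.result').html(buildColumns(data, 4));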

    Read the article

  • Transfering a set with a Wildcarded Generic to a List in Java

    - by Daniel Bingham
    I have a data type that contains a set and a method that expects List<? extends MyClass>. The data type has Set<? extends MyClass>. I need to be able to move the stuff out of the set and into the List. The order it goes into the list doesn't matter; it just needs to start keeping track of it so that it can be reordered when displayed. Suffice to say that changing the Set into a List in the data type is out of the question here. This seems pretty easy at first: create a new method that takes a Set instead of a List, changes it into a list and then passes it on to the old method that just took a list. The problem comes in changing the set to a list.

        public void setData(Set<? extends MyClass> data) {
            List<? extends MyClass> newData = new ArrayList< /* What goes here? */ >();
            for (ConcordaEntityBean o : data) {
                newData.add(o);
            }
            setData(newData);
        }

    Obviously, I can't instantiate an ArrayList with a wildcard; it chokes. I don't know the type at that point. Is there some way to pull the type out of data and pass it to ArrayList? Can I just instantiate it with MyClass? Is there some other way to do this?
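
    A minimal sketch of one common way around this, assuming the existing setData(List<? extends MyClass>) overload from the question: capture the element type with a method type parameter and use ArrayList's copy constructor, so the wildcard never has to appear in the new expression.

        // Sketch (inside the same class as the existing setData(List<? extends MyClass>) method;
        // needs java.util.ArrayList / java.util.List / java.util.Set imports).
        public <T extends MyClass> void setData(Set<T> data) {
            List<T> newData = new ArrayList<T>(data);   // copy constructor; order is unspecified, which is fine here
            setData(newData);                           // dispatches to the existing List overload
        }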

    Read the article

  • Etiquette: Version bump my fork of opensource project?

    - by Ross
    This question is about etiquette and open source projects. I have forked an application from github and added two new features. The first feature has been requested frequently elsewhere. I have added it. Code & implementation are clean (I think). The second feature is more of a hack. It will be of use to others, but the implementation is a little dirty in usage and more so in code. I need the feature, but I don't have the skills to fully implement it properly or to a level that could be considered a worthwhile contribution to the main project. How should the versioning work? Do I just bump up my version numbers care-free and push to my master branch? It is annoying not knowing which version is running, modified or original, as both have the same version number. But will it be confusing when, months later, my github page has a version number the same as the original but both are actually completely different? (I have made pull requests etc. but that is not the context of my question.) The project I have forked uses ruby jeweler, so it has a versioning format of: "Jeweler tracks the version of your project. It assumes you will be using a version in the format x.y.z. x is the 'major' version, y is the 'minor' version, and z is the patch version." Is this standard for other projects/languages too? Are my changes patches? Thanks

    Read the article

  • Java/Android get array from xml

    - by Ashley
    I have a list of latitude and longitude points in an XML file that is used throughout my application. I find myself repeating this code to get the points often and think there must be a better way:

        String[] mTempArray = getResources().getStringArray(R.array.stations);
        int len = mTempArray.length;
        mStationArray = new ArrayList<Station>();
        for (int i = 0; i < len; i++) {
            Station s = new Station();
            String[] fields = mTempArray[i].split("[\t ]");
            s.setValuesFromArray(fields);
            Log.i("ADD STATION", "" + s);
            mStationArray.add(s);
        }

    The XML is in the format of:

        <?xml version="1.0" encoding="utf-8"?>
        <resources>
            <array name="stations">
                <item>
                    <name>Station name</name>
                    <longitude>1111111</longitude>
                    <latitude>11111</latitude>
                    <code>1</code>
                </item>

    And another (possible) problem is that to get just one station I have to get all of them and pull the one I want from the array. Is this going to be considerably slower? Can I make this array consistent throughout the app? (But keeping the separate Intent methodology.)
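
    A rough sketch of one option implied above: parse the resource once and keep the result in a map, so single lookups don't re-read everything. The getCode() accessor on Station is assumed here, as is running this inside something with access to getResources() (an Activity or the Application subclass):

        // Sketch: lazily build a code -> Station map once and reuse it app-wide.
        private Map<String, Station> mStationsByCode;

        private Map<String, Station> getStationMap() {
            if (mStationsByCode == null) {
                mStationsByCode = new HashMap<String, Station>();
                for (String row : getResources().getStringArray(R.array.stations)) {
                    Station s = new Station();
                    s.setValuesFromArray(row.split("[\t ]"));
                    mStationsByCode.put(s.getCode(), s);   // getCode() is assumed to exist on Station
                }
            }
            return mStationsByCode;
        }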

    Read the article

  • Getting Response From Jquery JSON

    - by Howdy_McGee
    I'm having trouble getting a response from my PHP via jQuery / JSON / AJAX. I keep combining all these different tutorials, but I still can't seem to pull it all together since no one tutorial follows what I'm trying to do. Right now I'm trying to pass two arrays (since there's no easy way to pass associative arrays) to my jQuery AJAX function and just alert it out. Here's my code:

    PHP

        $names = array('john doe', 'jane doe');
        $ids = array('123', '223');
        $data['names'] = $names;
        $data['ids'] = $ids;
        echo json_encode($data);

    Jquery

        function getList(){
            $.ajax({
                type: "GET",
                url: 'test.php',
                data: "",
                complete: function(data){
                    var test = jQuery.parseJSON(data);
                    alert(test.names[0]);
                    alert("here");
                }
            }, "json");
        }
        getList();

    In my HTML file all I'm really calling is my javascript file for debugging purposes. I know I'm returning an object, but I'm getting an error with null values in my names section, and I'm not sure why. What am I missing? My PHP file returns {"names":["john doe","jane doe"],"ids":["123","223"]}. It seems to be just ending here: Uncaught TypeError: Cannot read property '0' of undefined. So my sub0 is killing me.
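
    A sketch of the shape that usually works for this kind of call, keeping the same test.php endpoint: with dataType "json", the success callback already receives the parsed object, whereas the first argument to complete is the jqXHR wrapper rather than the data itself (which is what leaves test.names undefined).

        // Sketch: let jQuery parse the JSON and read it in success.
        function getList() {
            $.ajax({
                type: "GET",
                url: "test.php",
                dataType: "json",
                success: function (data) {
                    alert(data.names[0]);   // "john doe"
                    alert(data.ids[0]);     // "123"
                }
            });
        }
        getList();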

    Read the article

  • Subversion Partial Export

    - by Jared
    I have a somewhat interesting development situation. The client and deployment server are inside a firewall without access to the Subversion server. But the developers are outside the firewall and are able to use the Subversion server. Right now the solution I have worked out is to update my local copy of the code and then pull out the most recently updated files using UnleashIT. The question is how to get just the updated files out of Subversion so that they can be physically transported through the firewall and put on the deployment server. I'm not worried about trying to change the firewall setup or trying to figure out an easier way to get to the Subversion server from inside the firewall. I'm just interested in a way to get a partial export from the repository of the most recently changed files. Are there any other suggestions? Answer found: In addition to the answer I marked as Answer, I've also found the following (from http://svn.haxx.se/tsvn/archive-2006-08/0051.shtml) to be able to do this from TortoiseSVN:

        * select the two revisions
        * right-click, "compare revisions"
        * select all files in the list
        * right-click, choose "export to..."

    Read the article

  • TCP/IP RST being sent differently in different browsers.

    - by Brian
    On Mac OS X (10.6), if I start a YouTube video download and pull the Ethernet cable for 5 or so seconds, then plug it back in, I get varying results depending on the browser. With Opera and Chrome, after I plug the cable back in the video continues to load. But with Safari and Firefox, it never does. Using Wireshark to look at the traffic, I found that Opera and Chrome simply ACK the first packet from YouTube after the cable has been plugged back in, but Safari and Firefox set the RST flag (0x4) in the TCP header and no more traffic follows. If I put a hub in between the machine and the internet connection, the problem goes away and all four browsers continue loading the video when the cable is plugged back into the hub. Again, looking at the Wireshark logs, it's evident that the machine doesn't see the multicast connection close and there is simply a delay in the packets flowing through. So it seems that if Safari and Firefox see a multicast connection close, and then later see data on that same connection, they will send a RST. My question is why? What is the correct course of action, and why are 2/4 browsers doing it one way, while the other 2/4 are doing it another way? Is there somewhere in the code that I can see where this is happening in Firefox, for instance? Thank you very much.

    Read the article

  • What can cause a spontaneous EPIPE error without either end calling close() or crashing?

    - by Hongli
    I have an application that consists of two processes (let's call them A and B), connected to each other through Unix domain sockets. Most of the time it works fine, but some users report the following behavior: A sends a request to B. This works. A now starts reading the reply from B. B sends a reply to A. The corresponding write() call returns an EPIPE error, and as a result B close() the socket. However, A did not close() the socket, nor did it crash. A's read() call returns 0, indicating end-of-file. A thinks that B prematurely closed the connection. Users have also reported variations of this behavior, e.g.: A sends a request to B. This works partially, but before the entire request is sent A's write() call returns EPIPE, and as a result A close() the socket. However B did not close() the socket, nor did it crash. B reads a partial request and then suddenly gets an EOF. The problem is I cannot reproduce this behavior locally at all. I've tried OS X and Linux. The users are on a variety of systems, mostly OS X and Linux. Things that I've already tried and considered: Double close() bugs (close() is called twice on the same file descriptor): probably not as that would result in EBADF errors, but I haven't seen them. Increasing the maximum file descriptor limit. One user reported that this worked for him, the rest reported that it did not. What else can possibly cause behavior like this? I know for certain that neither A nor B close() the socket prematurely, and I know for certain that neither of them have crashed because both A and B were able to report the error. It is as if the kernel suddenly decided to pull the plug from the socket for some reason.

    Read the article

  • Most efficient way to LIMIT results in a JOIN?

    - by johnnietheblack
    I have a fairly simple one-to-many type join in a MySQL query. In this case, I'd like to LIMIT my results by the left table. For example, let's say I have an accounts table and a comments table, and I'd like to pull 100 rows from accounts and all the associated comments rows for each. The only way I can think to do this is with a sub-select in the FROM clause instead of simply selecting FROM accounts. Here is my current idea:

        SELECT a.*, c.*
        FROM (SELECT * FROM accounts LIMIT 100) a
        LEFT JOIN `comments` c ON c.account_id = a.id
        ORDER BY a.id

    However, whenever I need to do a sub-select of some sort, my intermediate level SQL knowledge feels like it's doing something wrong. Is there a more efficient, or faster, way to do this, or is this pretty good? By the way... This might be the absolute simplest way to do this, which I'm okay with as an answer. I'm simply trying to figure out if there IS another way to do this that could potentially compete with the above statement in terms of speed.

    Read the article

  • Advice on Minimizing Stored Procedure Parameters

    - by RPM1984
    Hi Guys, I have an ASP.NET MVC web application that interacts with a SQL Server 2008 database via Entity Framework 4.0. On a particular page, I call a stored procedure in order to pull back some results based on selections on the UI. Now, the UI has around 20 different input selections, ranging from a textbox, dropdown lists, checkboxes, etc. Each of those inputs is "grouped" into logical sections. Example:

        Search box : "Foo"
        Checkbox A1: ticked, Checkbox A2: unticked
        Dropdown A: option 3 selected
        Checkbox B1: ticked, Checkbox B2: ticked, Checkbox B3: unticked

    So I need to call the SPROC like this:

        exec SearchPage_FindResults
            @SearchQuery = 'Foo',
            @IncludeA1 = 1,
            @IncludeA2 = 0,
            @DropDownSelection = 3,
            @IncludeB1 = 1,
            @IncludeB2 = 1,
            @IncludeB3 = 0

    The UI is not too important to this question - just wanted to give some perspective. Essentially, I'm pulling back results for a search query, filtering these results based on a bunch of (optional) selections a user can filter on. My questions/queries: What's the best way to pass these parameters to the stored procedure? Are there any tricks/new ways (e.g. SQL Server 2008) to do this? Special "table" parameters/arrays - can we pass through User-Defined Types? Keep in mind I'm using Entity Framework 4.0 - but could always use classic ADO.NET for this if required. What about XML? What are the serialization/de-serialization costs here? Is it worth it? How about a parameter for each logical section? Comma-separated perhaps? Just thinking out loud. This page is particularly important from a user point of view, and needs to perform really well. The stored procedure is already heavy in logic, so I want to minimize the performance implications - so keep that in mind. With that said - what is the best approach here?
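
    For the "table parameters" option mentioned above, a minimal sketch of what a SQL Server 2008 table-valued parameter looks like; the type and column names here are made up for illustration, and on the client side this would typically mean dropping to ADO.NET and passing a DataTable as SqlDbType.Structured rather than going through EF 4.0:

        -- Sketch: a user-defined table type carrying one logical section's selections.
        CREATE TYPE dbo.CheckboxSelections AS TABLE
        (
            SectionCode char(1)  NOT NULL,   -- e.g. 'A' or 'B'
            ItemNumber  int      NOT NULL,   -- e.g. 1, 2, 3
            IsTicked    bit      NOT NULL
        );
        GO

        CREATE PROCEDURE SearchPage_FindResults
            @SearchQuery nvarchar(200),
            @Selections  dbo.CheckboxSelections READONLY
        AS
        BEGIN
            SET NOCOUNT ON;
            -- join @Selections into the search logic instead of 20 scalar parameters
            SELECT SectionCode, ItemNumber, IsTicked FROM @Selections;
        END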

    Read the article

  • select nodes from a line of xml code with sql

    - by wondergoat77
    I have a table that stores a huge line/entire document of XML like this:

        <?xml version="1.0" encoding="utf-16"?>
        <RealQuestResponse xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Success>true</Success>
          <Subject>
            <AmbiguousMatches />
            <Assessment>
              <LandValue>0</LandValue>
              <ImprovementsValue>0</ImprovementsValue>
              <TotalValue>0</TotalValue>
            </Assessment>
            <RecentSales />
            <Warnings>
              <Score>0</Score>
              <TrusteesDeedRatio>0</Tr........etc

    Is there a way to pull any of these fields out of the XML? It is stored in a column in a table called AutomatedRequests. That table looks like this:

        requestid  Provider  Date      Success  Response
        1          test      1/2/2012  Y        <?xml version.....   <--- this is the xml code stored

    I've seen a couple of ways, but nothing like this. I'd basically like something like:

        select xmlnode1, xmlnode2, xmlnode3 from automatedrequests

    I have tried this, but it's not working:

        select xml.query('RealQuestResponse/Bedrooms/*')
        from automatedRequests
        where orderid = 1266162
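
    A sketch of the usual approach with SQL Server's xml data type methods, assuming the Response column can be cast to xml (if it is plain varchar, the encoding="utf-16" declaration may first need stripping or the column converting via nvarchar); element names are taken from the sample document above:

        -- Sketch: pull individual nodes out with value(); each XPath must select a single node.
        SELECT r.requestid,
               x.doc.value('(/RealQuestResponse/Success)[1]', 'varchar(10)')               AS Success,
               x.doc.value('(/RealQuestResponse/Subject/Assessment/LandValue)[1]', 'int')  AS LandValue
        FROM AutomatedRequests AS r
        CROSS APPLY (SELECT CAST(r.Response AS xml) AS doc) AS x
        WHERE r.requestid = 1266162;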

    Read the article

  • Parallel doseq for Clojure

    - by andrew cooke
    I haven't used multithreading in Clojure at all, so am unsure where to start. I have a doseq whose body can run in parallel. What I'd like is for there always to be 3 threads running (leaving 1 core free) that evaluate the body in parallel until the range is exhausted. There's no shared state, nothing complicated - the equivalent of Python's multiprocessing would be just fine. So something like:

        (dopar 3 [i (range 100)]
          ; repeated 100 times in 3 parallel threads...
          ...)

    Where should I start looking? Is there a command for this? A standard package? A good reference? So far I have found pmap, and could use that (how do I restrict to 3 at a time? looks like it uses 32 at a time - no, source says 2 + number of processors), but it seems like this is a basic primitive that should already exist somewhere. Clarification: I really would like to control the number of threads. I have processes that are long-running and use a fair amount of memory, so creating a large number and hoping things work out OK isn't a good approach (example which uses a significant chunk of available mem). Update: Starting to write a macro that does this, and I need a semaphore (or a mutex, or an atom I can wait on). Do semaphores exist in Clojure? Or should I use a ThreadPoolExecutor? It seems odd to have to pull so much in from Java - I thought parallel programming in Clojure was supposed to be easy... Maybe I am thinking about this completely the wrong way? Hmm. Agents?
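
    A rough sketch of the fixed-pool idea mentioned above; dopar* here is a plain function rather than the macro form sketched in the question, and java.util.concurrent does the thread capping:

        ;; Sketch: run (f x) for every x in coll on at most n threads, for side effects,
        ;; blocking until everything has finished.
        (import '(java.util.concurrent Executors Callable))

        (defn dopar* [n f coll]
          (let [pool  (Executors/newFixedThreadPool n)
                tasks (doall (map (fn [x] (.submit pool ^Callable (fn [] (f x)))) coll))]
            (doseq [t tasks] (.get t))   ; wait for every submitted task
            (.shutdown pool)))

        ;; usage, mirroring the pseudo-syntax above:
        ;; (dopar* 3 (fn [i] (do-something i)) (range 100))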

    Read the article

  • Output columns not in destination table?

    - by lance
    SUMMARY: I need to use an OUTPUT clause on an INSERT statement to return columns which don't exist on the table into which I'm inserting. If I can avoid it, I don't want to add columns to the table to which I'm inserting.

    DETAILS: My FinishedDocument table has only one column. This is the table into which I'm inserting.

        FinishedDocument
        -- DocumentID

    My Document table has two columns. This is the table from which I need to return data.

        Document
        -- DocumentID
        -- Description

    The following inserts one row into FinishedDocument. Its OUTPUT clause returns the DocumentID which was inserted. This works, but it doesn't give me the Description of the inserted document.

        INSERT INTO FinishedDocument
        OUTPUT INSERTED.DocumentID
        SELECT DocumentID
        FROM Document
        WHERE DocumentID = @DocumentID

    I need to return from the Document table both the DocumentID and the Description of the matching document from the INSERT. What syntax do I need to pull this off? I'm thinking it's possible only with the one INSERT statement, by tweaking the OUTPUT clause (in a way I clearly don't understand)? Is there a smarter way that doesn't resemble the path I'm going down here? EDIT: SQL Server 2005
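
    On SQL Server 2005 the OUTPUT clause of an INSERT can only reference the inserted columns, so a sketch of one workaround consistent with the tables above is to capture the new keys and join back to Document for the Description:

        -- Sketch: capture inserted keys in a table variable, then join for Description.
        DECLARE @Inserted TABLE (DocumentID int);

        INSERT INTO FinishedDocument (DocumentID)
        OUTPUT INSERTED.DocumentID INTO @Inserted
        SELECT DocumentID
        FROM Document
        WHERE DocumentID = @DocumentID;

        SELECT d.DocumentID, d.Description
        FROM @Inserted AS i
        JOIN Document AS d ON d.DocumentID = i.DocumentID;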

    Read the article
