Search Results

Search found 33538 results on 1342 pages for 'select query'.

Page 523 of 1342

  • Using Subsonic 3.0 Advanced Templates

    - by umit
    Hi all, I've been trying to use SubSonic Advanced Templates in a project for a while, but most of the time I find myself writing a stored procedure instead, as I can't find a proper way of doing it in code. SubSonic created corresponding objects for my DB tables, and for foreign keys it created IQueryable fields inside each object. These fields are not loaded by default, and a new SQL query is executed when you access them.

    1. Is there a way to get all the data in one query (deep load)?

    These fields also cannot be assigned, so when I want to create an object in a maintenance page, I can't put all the data into the object before saving it to the DB:

        Post post = new Post();
        //get photos for this post
        IList<PostPhoto> postPhotos = GetPostPhotos();
        post.PostPhotos = postPhotos;

    2. Is it possible to have one Post object with all its fields set from user input? Think of the Post object above and assume I've successfully assigned its fields. Now I need to save it to the DB.

    3. Is using BatchQuery the only way to do it in one query? If I have 4 photos in the PostPhotos field, 2 of them previously saved and 2 of them new, can I use the Update method to handle both the adding and the updating of these photos?

    Any ideas or links are appreciated. Cheers...

    Read the article

  • SQL Server getdate() to a string like "2009-12-20"

    - by Adam Kane
    In Microsoft SQL Server 2005 and .NET 2.0, I want to convert the current date to a string of this format: "YYYY-MM-DD". For example, December 20th, 2009 would become "2009-12-20". How do I do this in SQL? The context of this SQL statement is the table definition; in other words, this is the default value, so when a new record is created, the current date is stored as a string in the above format. I'm trying:

        SELECT CONVERT(VARCHAR(10), GETDATE(), 102) AS [YYYY.MM.DD]

    But SQL Server keeps converting that to:

        ('SELECT CONVERT(VARCHAR(10), GETDATE(), 102) AS [YYYY.MM.DD]')

    so the result is just the literal string:

        'SELECT CONVERT(VARCHAR(10), GETDATE(), 102) AS [YYYY.MM.DD]'

    Here's what the Visual Studio Server Explorer table-definition properties show (screenshot omitted): the wrapper bits are being added automatically, converting it all to a literal string: (N' '). Here's the reason I'm trying to use something other than the basic DATETIME I was using previously. This is the error I get when hooking everything to an ASP.NET GridView and trying to do an update via the grid view:

        Server Error in '/' Application.
        The version of SQL Server in use does not support datatype 'date'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.ArgumentException: The version of SQL Server in use does not support datatype 'date'.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
        Stack Trace: [ArgumentException: The version of SQL Server in use does not support datatype 'date'.]

    Note: I've added a related question to try to get around the "SQL Server in use does not support datatype 'date'" error so that I can use a DATETIME as recommended.
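
    A hedged aside on the format itself: style 102 produces the dotted "yyyy.mm.dd" form, while style 120 ("yyyy-mm-dd hh:mi:ss") truncated to 10 characters gives the dashed form the question asks for. A minimal sketch; the table and column names are invented for illustration:

        -- style 120 is ODBC canonical; the first 10 characters are 'yyyy-mm-dd'
        SELECT CONVERT(VARCHAR(10), GETDATE(), 120);  -- e.g. '2009-12-20'

        -- as a column default (hypothetical table/column):
        CREATE TABLE demo
        (
            created VARCHAR(10) NOT NULL
                DEFAULT (CONVERT(VARCHAR(10), GETDATE(), 120))
        );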

    Read the article

  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a Data Flow task, I can slip a Row Count into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the row count was 0. This works well for me, but I have no corresponding strategy for SQL tasks expected to return a single row. In that event, I'm returning those values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables. I can branch on that component failing, but there's a side effect: if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the SQL Agent to disregard the step failure, but then I won't know if I have a legitimate error in that step.

    This seems harder than it should be. The only strategy I can think of is to run the same query with a count(*) aggregate, test whether that returns a number greater than 0, and if so run the query again without the count. That's ugly because I have the same query in two places that I need to keep in sync. Is there a better way?
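
    One hedged workaround sketch: rewrite the lookup so it always returns exactly one row, which sidesteps the failed variable assignment entirely. Wrapping the selected columns in aggregates guarantees a single row, and COUNT(*) doubles as the "was anything found" flag (the table and column names below are invented):

        -- always returns one row; col1 is NULL and found is 0 when nothing matches
        SELECT MAX(col1) AS col1, COUNT(*) AS found
        FROM lookup_table
        WHERE key_col = ?;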

    Read the article

  • Nose2 multiprocess error on Windows7

    - by tt293
    I was looking into nose2 as a way to get around the restriction of having both xunit output and multiprocessing in nose 1.3. However, when always-on is set to False in the [multiprocess] section, I can only get a single process running, while when running with always-on set to True, I get the following error:

        ----------------------------------------------------------------------
        Ran 0 tests in 0.043s

        OK
        Traceback (most recent call last):
          File "C:\dev\testing\Tests\PythonTests\venv\Scripts\nose2-script.py", line 8, in <module>
            load_entry_point('nose2==0.4.7', 'console_scripts', 'nose2')()
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 284, in discover
            return main(*args, **kwargs)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 98, in __init__
            super(PluggableTestProgram, self).__init__(**kw)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\unittest2-0.5.1-py2.7.egg\unittest2\main.py", line 98, in __init__
            self.runTests()
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 260, in runTests
            self.result = runner.run(self.test)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\runner.py", line 53, in run
            executor(test, result)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\plugins\mp.py", line 60, in _runmp
            ready, _, _ = select.select(rdrs, [], [], self.testRunTimeout)
        select.error: (10038, 'An operation was attempted on something that is not a socket')

    This is running Python 2.7.5 (32-bit) on Windows 7 in a virtualenv with six-1.1.0, unittest2-0.5.1, and nose2-0.4.7. (I get the same behavior outside of the venv, so I don't think that is the issue here.)

    Read the article

  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi All, I have a question regarding the implementation of smart-search features, for example something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field for which the query will be created, you present different options to the end user. For the moment, assume a Subject, Verb, Object approach. For instance, say you have the following:

        SUBJECTs: message, to_address, from_address, subject, date_received
        VERBs: contains, does_not_contain, is_equal_to, greater_than, less_than
        OBJECTs: ???????

    Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XMLesque file of some sort) to store, and later retrieve/present, my criteria for smart searches/mailboxes. As an example, using SVO I could easily store and then reconstruct a query for "date between two dates": simply use "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand -- not necessarily in the query creation (as that is rather simplistic), but in the option presentation and storage mechanism.

    Perhaps I need to think on a more granular level. Perhaps I need to simply allow the user to select AND or OR for each entry independently, instead of making it an all-or-nothing smart search (i.e., instead of MATCH ALL or MATCH ANY, let them select per criterion -- I just don't want it to turn into a Hydra). Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late, and my brain is toast. Best.
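
    A hedged sketch of one possible storage schema for exactly this per-criterion AND/OR idea (every name below is invented): a header row per smart search, plus one row per SVO criterion, each carrying its own conjunction relative to the previous row.

        CREATE TABLE smart_search (
            search_id INTEGER PRIMARY KEY,
            name      VARCHAR(100) NOT NULL
        );

        CREATE TABLE smart_search_criterion (
            criterion_id INTEGER PRIMARY KEY,
            search_id    INTEGER NOT NULL REFERENCES smart_search (search_id),
            position     INTEGER NOT NULL,      -- evaluation order within the search
            conjunction  VARCHAR(3),            -- 'AND' / 'OR'; NULL for the first row
            subject      VARCHAR(50) NOT NULL,  -- e.g. 'date_received'
            verb         VARCHAR(50) NOT NULL,  -- e.g. 'greater_than'
            object       VARCHAR(255)           -- the comparison value
        );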

    Read the article

  • How to populate Range variable from a Sub/Function call?

    - by Ken Ingram
    I am trying to get this Sub to work, but the operationalRange variable is not being assigned, despite the fact that the function selectBodyRow(bodyName) works fine.

        Sub sortRows(bodyName As String, ByRef wksht As Worksheet)
            Dim operationalRange As Range
            Set operationalRange = selectBodyRow(bodyName)
            Debug.Print "Sorting Worksheet: " & wksht.Name
            If Not operationalRange Is Nothing Then
                operationalRange.Select
                Debug.Print "Sorting " & operationalRange.Count & "Rows."
                ActiveWorkbook.Worksheets(wksht.Name).Sort.SortFields.Clear
                ActiveWorkbook.Worksheets(wksht.Name).Sort.SortFields.Add Key:=operationalRange, _
                    SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal
                ActiveWorkbook.Worksheets(wksht.Name).Sort.SortFields.Add Key:=operationalRange, _
                    SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal
                With ActiveWorkbook.Worksheets(wksht.Name).Sort
                    .SetRange operationalRange
                    .Header = xlGuess
                    .MatchCase = False
                    .Orientation = xlTopToBottom
                    .SortMethod = xlPinYin
                    .Apply
                End With
            Else
                MsgBox "Body is not being Set"
            End If
        End Sub

    The function being called by the above Sub is:

        Function selectBodyRow(bodyName As String) As Range
            Dim rangeStart As String, rangeEnd As String
            Dim selectionStart As Range, selectionEnd As Range
            Dim result As Range, srchRng As Range, cngrs As Variant
            If bodyName = "WEST" Then
                rangeStart = "<-WEST START->"
                rangeEnd = "<-WEST END->"
            ElseIf bodyName = "EAST" Then
                rangeStart = "<-EAST START->"
                rangeEnd = "<-EAST END->"
            End If
            Set srchRng = Range("A:A")
            srchRng.Select
            Set selectionStart = srchRng.Find(What:=rangeStart, After:=ActiveCell, _
                LookIn:=xlValues, LookAt:=xlPart, SearchOrder:=xlByRows, _
                SearchDirection:=xlNext, MatchCase:=False, SearchFormat:=False)
            Set selectionEnd = srchRng.Find(What:=rangeEnd, After:=ActiveCell, _
                LookIn:=xlValues, LookAt:=xlPart, SearchOrder:=xlByRows, _
                SearchDirection:=xlNext, MatchCase:=False, SearchFormat:=False)
            Set result = Range(selectionStart.Offset(1, 0), selectionEnd.Offset(-1, 0))
            result.EntireRow.Select
        End Function

    Read the article

  • jQuery function

    - by kevin
    Hello, I'd like to get the current element firing this event, inside the event handler. Ideally I need its id, selected value, and class. Thanks in advance.

        $(document).ready(function () {
            $("#positions select").live("change", function () {
                var id = $("#category_id").val();
                //var id = this.val();
                alert(id);
                $.getJSON("/Category/GetSubCategories/" + id, function (data) {
                    if (data.length > 0) {
                        alert(data.length);
                        var position = document.getElementById('positions');
                        var tr = position.insertRow(7);
                        var td1 = tr.insertCell(-1);
                        var td = tr.insertCell(-1);
                        var sel = document.createElement("select");
                        sel.name = 'sel';
                        sel.id = 'sel';
                        sel.setAttribute('class', 'category');
                        // $("positions select").live("change", function () {
                        //});
                        td.appendChild(sel);
                        $.each(data, function (GetSubCatergories, category) {
                            $('#sel').append($("<option></option>").
                                attr("value", category.category_id).
                                text(category.name));
                        });
                    }
                });
            });
        });

    Read the article

  • PDO using singleton stored as class property

    - by Misiur
    Hi again. OOP drives me crazy. I can't get PDO to work. Here's my DB class:

        class DB extends PDO
        {
            public function &instance($dsn, $username = null, $password = null, $driver_options = array())
            {
                static $instance = null;
                if($instance === null)
                {
                    try
                    {
                        $instance = new self($dsn, $username, $password, $driver_options);
                    }
                    catch(PDOException $e)
                    {
                        throw new DBException($e->getMessage());
                    }
                }
                return $instance;
            }
        }

    It's okay when I do something like this:

        try
        {
            $db = new DB(DB_TYPE.':host='.DB_HOST.';dbname='.DB_NAME, DB_USER, DB_PASS);
        }
        catch(DBException $e)
        {
            echo $e->getMessage();
        }

    But this:

        try
        {
            $db = DB::instance(DB_TYPE.':host='.DB_HOST.';dbname='.DB_NAME, DB_USER, DB_PASS);
        }
        catch(DBException $e)
        {
            echo $e->getMessage();
        }

    does nothing. I mean, even when I use a wrong password/username, I don't get any exception. Second thing: I have a class which is the "heart" of my site:

        class Core
        {
            static private $instance;
            public $db;

            public function __construct()
            {
                if(!self::$instance)
                {
                    $this->db = DB::instance(DB_TYPE.':hpost='.DB_HOST.';dbname='.DB_NAME, DB_USER, DB_PASS);
                }
                return self::$instance;
            }

            private function __clone() { }
        }

    I've tried to use "new DB" inside the class, but this:

        $r = $core->db->query("SELECT * FROM me_config");
        print_r($r->fetch());

    returns nothing. With:

        $sql = "SELECT * FROM me_config";
        print_r($core->db->query($sql));

    I get:

        PDOStatement Object ( [queryString] => SELECT * FROM me_config )

    I'm really confused. What am I doing wrong?

    Read the article

  • Heavy MySQL operation & time constraints [closed]

    - by Rahul Jha
    There is a performance issue I am stuck on in my application, which is based on PHP & MySQL. The application is for data migration: data has to be uploaded and, after various processes (cleaning of foreign characters, duplicate check, ID generation), inserted into one central table and then into 5 different tables. There, an ID is generated, and that ID has to be updated back to the central table. There are different sets of records and validation rules.

    The problem I am facing is that when I insert, say, a 4K-row file (containing 20 columns), it works fine: within 15 minutes everything gets inserted. But when I insert the same records again, it takes one hour (ideally it should finish by marking the earlier inserted data as duplicate). After going through the log file, I noticed there is a MySQL SELECT statement where I am checking for duplicates and getting the IDs that are duplicates. Then I am calling a function inside a for loop, which basically inserts records into the 5 tables and updates the ID in the central table. This called function takes the majority of the whole processing time.

    P.S. The records have to be inserted record by record. Kindly suggest a solution.

        //This is that sample code
        $query=mysql_query("SELECT DISTINCT p1.ID
                            FROM table1 p1, table2 p2, table3 a
                            WHERE p2.datatype =0
                              AND (p1.datatype =1 || p1.datatype=2)
                              AND p2.ID =0
                              AND p1.ID = a.ID
                              AND p1.coulmn1 = p2.column1
                              AND p1.coulmn2 = p2.coulmn2
                              AND a.coulmn3 = p2.column3");
        $num=mysql_num_rows($query);
        for($i=0;$i<$num;$i++)
        {
            $f=mysql_result($query,$i,"ID");
            //calling function
            RecordInsert($f);
        }
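
    A hedged suggestion, sketched under the assumption that the join columns above are unindexed: composite indexes covering the equality columns usually turn this kind of duplicate scan from repeated full-table comparisons into index lookups. The index names are invented, and the column names are copied verbatim from the sample code (typos included):

        -- hypothetical indexes on the columns the duplicate check joins/filters on
        CREATE INDEX idx_t1_dup ON table1 (datatype, coulmn1, coulmn2, ID);
        CREATE INDEX idx_t2_dup ON table2 (datatype, ID, column1, coulmn2, column3);
        CREATE INDEX idx_t3_dup ON table3 (ID, coulmn3);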

    Read the article

  • Why is my searchable activity's Intent.getAction() null?

    - by originalbryan
    I've followed the SearchManager documentation yet am still having trouble making one of my app's activities searchable. From my activity, the search dialog appears, I enter a query, hit search, my activity reopens, then I see this in the log:

        D/SearchDialog( 584): launching Intent { act=android.intent.action.SEARCH flg=0x10000000 cmp=com.clinkybot.geodroid2/.views.Waypoints (has extras) }
        I/SearchDialog( 584): Starting (as ourselves) #Intent;action=android.intent.action.SEARCH;launchFlags=0x10000000;component=com.clinkybot.geodroid2/.views.Waypoints;S.user_query=sdaf;S.query=sdaf;end
        I/ActivityManager( 584): Starting activity: Intent { act=android.intent.action.SEARCH flg=0x10000000 cmp=com.clinkybot.geodroid2/.views.Waypoints (has extras) }
        D/WAYPOINTS( 1018): NI Intent { cmp=com.clinkybot.geodroid2/.views.Waypoints (has extras) }
        D/WAYPOINTS( 1018): NI null
        D/WAYPOINTS( 1018): NI false

    It appears to me that everything is fine up until the last three lines. The "NI" lines are getIntent().toString(), getIntent().getAction(), and getIntent().hasExtra(SearchManager.QUERY), respectively. ActivityManager appears to be starting my activity with the correct action; then when my activity starts, it contains no action!? What am I doing wrong? The relevant portion of my manifest is:

        <activity android:name=".views.Waypoints"
                  android:label="Waypoints"
                  android:launchMode="singleTop">
            <intent-filter>
                <action android:name="android.intent.action.SEARCH" />
                <category android:name="android.intent.category.DEFAULT" />
            </intent-filter>
            <meta-data android:name="android.app.searchable"
                       android:resource="@xml/searchable" />
        </activity>

    Read the article

  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason, SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete.

    I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually?

    Note: My database in its initial state is about 250K, and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view.

    Update: Here are the table definitions for the example I used in my question:

        create table Order
        (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer
        (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )

    Read the article

  • SQL n:m Inheritance join

    - by Nightmares
    I want to join a table against itself; it contains n:m relationships between groups (the groups themselves are defined in a separate table). Each entry lists a member_group_id and a parent_group_id. The structure is:

        id(int) | member_group_id(int) | parent_group_id(int)

    The "base" query looks like this:

        select p1.group_id, p2.group_id, p1.member_group_id, p2.member_group_id
        from group_member_group as p1
        join group_member_group as p2 on p2.member_group_id = p1.member_group_id

    The "base" query correctly shows all relationships (I checked by doing it manually). The problem is that when I try to apply a where clause to this query to filter for a specific group as the "point of origin" (the first group for which I want all parent groups), it returns only the closest parents. For example:

        select p1.group_id, p2.group_id, p1.member_group_id, p2.member_group_id
        from group_member_group as p1
        join group_member_group as p2 on p2.member_group_id = p1.member_group_id
        where p1.group_id = 1

    Can anyone give me a clue how to fix this, or suggest a different approach to realize it? (I suppose I could always do this in my C++ source code on the server side, but I would have to transfer an entire table, which has high growth potential, to the application server.)

    UPDATE:

        select p1.group_id, p2.group_id, p1.member_group_id, p2.member_group_id
        from group_member_group as p1
        join group_member_group as p2 on p2.group_id = p1.member_group_id

    Typing mistake confirmed. Now I don't get past the first level of inheritance, period. Thanks @denied for pointing that out.
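
    A hedged sketch of the usual set-based fix: a single self-join only ever reaches one level of the hierarchy, so "all ancestors, however deep" needs recursion. On engines that support recursive CTEs, it can look like this (column names taken from the structure listed above; the starting group id is arbitrary):

        with recursive ancestors as (
            select member_group_id, parent_group_id
            from group_member_group
            where member_group_id = 1          -- point of origin
            union all
            select g.member_group_id, g.parent_group_id
            from group_member_group as g
            join ancestors as a on g.member_group_id = a.parent_group_id
        )
        select * from ancestors;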

    Read the article

  • Unable to add FromName to e-mail using cdosys in SQL Server 2008

    - by Alex Andronov
    I have a piece of cdosys code which runs correctly and generates e-mail, with my SQL Server 2008 server talking to a Microsoft Exchange 2003 server. However, the from name is not appearing on the e-mails when they arrive. Is there a fault in the code, or is it not possible this way? Thanks in advance.

        usp_send_cdosysmail
            @from varchar(500),
            @to text,
            @bcc text,
            @subject varchar(1000),
            @body text,
            @smtpserver varchar(25),
            @bodytype varchar(10)
        as
        declare @imsg int
        declare @hr int
        declare @source varchar(255)
        declare @description varchar(500)
        declare @output varchar(8000)

        exec @hr = sp_oacreate 'cdo.message', @imsg out
        exec @hr = sp_oasetproperty @imsg, 'configuration.fields("http://schemas.microsoft.com/cdo/configuration/sendusing").value', '2'
        exec @hr = sp_oasetproperty @imsg, 'configuration.fields("http://schemas.microsoft.com/cdo/configuration/smtpserver").value', @smtpserver
        exec @hr = sp_oamethod @imsg, 'configuration.fields.update', null
        exec @hr = sp_oasetproperty @imsg, 'to', @to
        exec @hr = sp_oasetproperty @imsg, 'bcc', @bcc
        exec @hr = sp_oasetproperty @imsg, 'from', @from
        exec @hr = sp_oasetproperty @imsg, 'fromname', 'A From Name'
        exec @hr = sp_oasetproperty @imsg, 'subject', @subject
        -- if you are using html e-mail, use 'htmlbody' instead of 'textbody'.
        exec @hr = sp_oasetproperty @imsg, @bodytype, @body
        exec @hr = sp_oamethod @imsg, 'send', null

        -- sample error handling.
        if @hr <> 0
        begin
            select @hr
            exec @hr = sp_oageterrorinfo null, @source out, @description out
            if @hr = 0
            begin
                select @output = ' source: ' + @source
                print @output
                select @output = ' description: ' + @description
                print @output
            end
            else
            begin
                print ' sp_oageterrorinfo failed.'
                return
            end
        end
        exec @hr = sp_oadestroy @imsg
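
    For what it's worth, a hedged guess at the cause: CDO.Message has no 'fromname' property, so that sp_OASetProperty call most likely fails silently (its @hr is then overwritten by the next call). The display name normally travels inside the From address itself. A sketch; the address is invented:

        exec @hr = sp_oasetproperty @imsg, 'from', '"A From Name" <noreply@example.com>'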

    Read the article

  • What is the best way to handle connections to MySQL from C#?

    - by srk
    I am working on a C# application which connects to a MySQL server. There are about 20 functions which connect to the database, and the application will be deployed on over 200 machines. I am using the code below, which is identical for all the functions, to connect to my database. The problem is, I can see some connections were not closed and are still alive once deployed to those 200+ machines.

    Connection string:

        <add key="Con_Admin" value="server=test-dbserver; database=test_admindb; uid=admin; password=1Password; Use Procedure Bodies=false;" />

    Declaration of the connection globally in the application [Global.cs]:

        public static MySqlConnection myConn_Instructor = new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]);

    Function to query the database:

        public static DataSet CheckLogin_Instructor(string UserName, string Password)
        {
            DataSet dsValue = new DataSet();
            //MySqlConnection myConn = new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]);
            try
            {
                string Query = "SELECT accounts.str_nric AS Nric, accounts.str_password AS `Password`," +
                               " FROM accounts " +
                               " WHERE accounts.str_nric = '" + UserName + "' AND accounts.str_password = '" + Password + "\'";
                MySqlCommand cmd = new MySqlCommand(Query, Global.myConn_Instructor);
                MySqlDataAdapter da = new MySqlDataAdapter();
                if (Global.myConn_Instructor.State == ConnectionState.Closed)
                {
                    Global.myConn_Instructor.Open();
                }
                cmd.ExecuteScalar();
                da.SelectCommand = cmd;
                da.Fill(dsValue);
                Global.myConn_Instructor.Close();
            }
            catch (Exception ex)
            {
                Global.myConn_Instructor.Close();
                ExceptionHandler.writeToLogFile(System.Environment.NewLine + "Target : " + ex.TargetSite.ToString() +
                    System.Environment.NewLine + "Message : " + ex.Message.ToString() +
                    System.Environment.NewLine + "Stack : " + ex.StackTrace.ToString());
            }
            return dsValue;
        }

    Read the article

  • Iterate through every node in an XML document

    - by Rachel
    Hi, I am trying to iterate through every node in an XML document, be it an element node, text node, or comment. With the XSL below, the very first statement prints the complete XML. How do I copy the very first node in $nodes and call the template process-nodes again, removing the first node for my next iteration?

        <?xml version='1.0'?>
        <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

          <xsl:template match="/">
            <xsl:call-template name="process-nodes">
              <xsl:with-param name="nodes" select="//node()" as="node()*"/>
            </xsl:call-template>
          </xsl:template>

          <xsl:template name="process-nodes">
            <xsl:param name="nodes" as="node()*" />
            <xsl:copy-of select="$nodes[1]"/>
            <xsl:if test="$nodes">
              <xsl:call-template name="process-nodes">
                <xsl:with-param name="nodes" select="remove($nodes, 1)" />
              </xsl:call-template>
            </xsl:if>
          </xsl:template>

        </xsl:stylesheet>

    Note: I am looking to fix the issue within this kind of implementation, rather than changing the template match to <xsl:template match="@* | node()">, as I need to do some processing that requires this approach. Thanks.

    Read the article

  • Javascript XMLHttpRequest Post method

    - by user535617
    Hey, so I have a question about posting using an XMLHttpRequest. In theory, if I post a username and password to an https domain (which I have yet to get working, unfortunately), would the responseText then change to the next website, or should the text fields become filled in? What normally happens is you navigate to this page via a browser, enter a username and password, and it uses a POST method when the submit button is clicked, doing some authentication under the hood and returning a different page. I feel like maybe the responseText should even stay exactly the same (which is what happens now), but I don't know, as I have no experience with this kind of thing.

        this.requests[1].open("POST", "https://" + this.address, true);
        var query = "target=%2Fcgi-bin%2FStatusConfig.cgi%3FPage%3Dindex&userfile=&username=user&password=pass&log+in=Log+in";
        this.requests[1].setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        this.requests[1].setRequestHeader("Content-length", query.length);
        this.requests[1].setRequestHeader("Keep-Alive", 115);
        this.requests[1].setRequestHeader("Connection", "keep-alive");
        this.requests[1].setRequestHeader("Host", this.address);
        this.requests[1].send(query);
        this.requests[1].onreadystatechange = onReadyStateChange1;

    Then basically onReadyStateChange1 displays the responseText when ready. Any light that could be shed on what SHOULD be happening with the post and responseText would be very appreciated, as would any advice on getting the new, logged-into page. For further clarification, what I'm trying to do is log in and then return the new page, because the login page displays only log-in information/functionality, and the page after logging in has a lot of relevant information. I'm not trying to check the credentials as much as I'm trying to get the script to log in so it can access the next page. Granted, the credentials will have to be valid for that. Thanks all.

    Read the article

  • Most efficient way to bind a Listbox with SelectionMode=Multiple

    - by Draak
    Hi, I have an ASP.NET web form that has a listbox (lbxRegions) with the multi-select option enabled. In my DB, I have a table with an XML field that holds a list of regions. I need to populate the listbox with all available regions and then "check off" the list items that match the regions in the DB table. The list options also need to be ordered by region name. So, I wrote the following code, which works just fine -- no problems. But I was wondering if anyone can think of a better (more succinct, more efficient) way to do the same thing. Thanks in advance.

        Dim allRegions = XElement.Load(Server.MapPath(Request.ApplicationPath) & "\Regions.xml").<country>.<regions>.<region>
        Dim selectedRegions = (From ev In dc.Events Where ev.EventId = 2951).Single.CEURegions.<country>.<regions>.<region>
        Dim unselectedRegions = allRegions.Except(selectedRegions)

        Dim selectedItems = From x In selectedRegions Select New ListItem() _
            With {.Value = x.@code, .Text = x.Value, .Selected = True}
        Dim unselectedItems = From x In unselectedRegions Select New ListItem() _
            With {.Value = x.@code, .Text = x.Value}

        Dim allItems = selectedItems.Union(unselectedItems).OrderBy(Function(x) x.Text)
        lbxRegions.Items.AddRange(allItems.ToArray())

    P.S. You can post code in C# if you like.

    Read the article

  • How can I refactor this to work without breaking the pattern horribly?

    - by SnOrfus
    I've got a base class that is used for filtering. It's a template-method object that looks something like this:

        public class Filter
        {
            public void Process(User u, GeoRegion r, int countNeeded)
            {
                List<account> selected = this.Select(u, r, countNeeded);           // 1
                List<account> filtered = this.Filter(selected, u, r, countNeeded); // 2

                if (filtered.Count > 0) { /* do businessy stuff */ }               // 3

                if (filtered.Count < countNeeded)
                    this.SendToSuccessor(u, r, countNeeded - filtered)             // 4
            }
        }

    Select(...) and Filter(...) are protected abstract methods implemented by the derived classes. Select(...) finds objects in the DB based on x criteria, and Filter(...) filters those selected further. If the remaining filtered collection has anything in it, we do some business stuff with it (unimportant to the problem here). SendToSuccessor(...) is called if there weren't enough objects found after filtering (it's a composite where the next class in succession is also derived from Filter but has different filtering criteria).

    All has been OK, but now I'm building another set of filters that I was going to subclass from this. The filters I'm building, however, require different params, and I don't want to just implement those methods and leave the params unused, or add the ones I need to the param list and have them unused by the existing filters. They still perform the same logical process, though. I also don't want to complicate the consumer code for this, which looks like:

        Filter f = new Filter1();
        Filter f2 = new Filter2();
        Filter f3 = new Filter3();
        f.Sucessor = f2;
        f2.Sucessor = f3;
        /* and so on, adding filters as successors to previous ones */

        foreach (User u in users)
        {
            foreach (GeoRegion r in regions)
            {
                f.Process(u, r, ##);
            }
        }

    How should I go about it?

    Read the article

  • Using lambda expressions and LINQ

    - by Andy
    So I've just started working with LINQ as well as lambda expressions, and I've run into a small hiccup while trying to get some data that I want. This method should return a list of all issues from Jira for a given project that are Open or In Progress. Here's the code:

        public static List<string> getOpenIssuesListByProject(string _projectName)
        {
            JiraSoapServiceService jiraSoapService = new JiraSoapServiceService();
            string token = jiraSoapService.login(DEFAULT_UN, DEFAULT_PW);
            string[] keys = { getProjectKey(_projectName) };

            RemoteStatus[] statuses = jiraSoapService.getStatuses(token);
            var desiredStatuses = statuses.Where(x => x.name == "Open" || x.name == "In Progress")
                                          .Select(x => x.id);

            RemoteIssue[] AllIssues = jiraSoapService.getIssuesFromTextSearchWithProject(token, keys, "", 99);
            IEnumerable<RemoteIssue> openIssues = AllIssues.Where(x =>
            {
                foreach (var v in desiredStatuses)
                {
                    if (x.status == v)
                        return true;
                    else
                        return false;
                }
                return false;
            });

            return openIssues.Select(x => x.key).ToList();
        }

    Right now this only selects issues that are "Open" and seems to skip those that are "In Progress". My questions: first, why am I only getting the "Open" issues, and second, is there a better way to do this? The reason I get all the statuses first is that each issue only stores its status ID, so I get all the statuses, take the IDs that match "Open" and "In Progress", and then match those ID numbers against each issue's status field.

    Read the article

  • DDD and MVC Models hold ID of separate entity or the entity itself?

    - by Dr. Zim
    If you have an Order that references a Customer, does the model include the ID of the customer, or a copy of the customer object, like a value object (thinking DDD)? I would like to do this:

        public class Order
        {
            public int ID { get; set; }
            public Customer customer { get; set; }
            ...
        }

    Right now I do this:

        public class Order
        {
            public int ID { get; set; }
            public int customerID { get; set; }
            ...
        }

    It would be more convenient to include the complete customer object, rather than an ID, in the view model passed to the form. Otherwise, I need to figure out how to get the customer information to the view that the order references by ID. This also implies that the repository must understand how to deal with the customer object it finds within the order object when they call save (if we select the first option). If we select the second option, we will need to know where in the view model to put it. It is certain they will select an existing customer; however, it is also certain they may want to change the information in place on the display form. One could argue to have the controller extract the customer object, submit customer changes separately to the repository, then submit changes to the order, keeping the customerID in the order.

    Read the article

  • Why am I returning empty records when querying MySQL with PHP?

    - by Brian Bolton
    I created the following script to query a table and return the first 30 results. The query returns 30 results, but they do not have any text or information. Why would this be? The table stores Vietnamese characters. The database is MySQL 4. Here's the page: http://saomaidanang.com/recentposts.php Here's the code:

        <?php
        header( 'Content-Type: text/html; charset=utf-8' );

        //CONNECTION INFO
        $dbms = 'mysql';
        $dbhost = 'xxxxx';
        $dbname = 'xxxxxxx';
        $dbuser = 'xxxxxxx';
        $dbpasswd = 'xxxxxxxxxxxx';

        $conn = mysql_connect($dbhost, $dbuser, $dbpasswd ) or die('Error connecting to mysql');
        mysql_select_db($dbname , $conn);

        //QUERY
        $result = mysql_query("SET NAMES utf8");
        $cmd = 'SELECT * FROM `phpbb_posts_text` ORDER BY `phpbb_posts_text`.`post_subject` DESC LIMIT 0, 30 ';
        $result = mysql_query($cmd);
        ?>
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html dir="ltr">
        <head>
            <title>recent posts</title>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        </head>
        <body>
        <p>
        <?php
        //DISPLAY
        while ($myrow = mysql_fetch_row($result))
        {
            echo 'post subject:';
            echo(utf8_encode($myrow ['post_subject']));
            echo 'post text:';
            echo(utf8_encode($myrow ['post_text']));
        }
        ?>
        </p>
        </body>

    Read the article

  • Index an array expression directly in PostgreSQL

    - by wich
    I'm trying to insert data into a table from a template table. I need to rewrite one of the columns, for which I wanted to use a directly indexed array expression, but I can't seem to find how to do this, if it is even possible. The scenario:

        create table template (
            id integer,
            index integer,
            foo integer);

        insert into template values
            (0, 1, 23), (0, 2, 18), (0, 3, 16), (0, 4, 7),
            (1, 1, 17), (1, 2, 26), (1, 3, 11), (1, 4, 3);

        create table data (
            data_id integer,
            foo integer);

    Now what I'd like to do is the following:

        insert into data
        select (array[3,7,5,2])[index], foo
        from template
        where id = 1;

    But this doesn't work; the (array[3,7,5,2])[index] syntax isn't valid. I tried a few variants, but was unable to get anything working, and I wasn't able to find the correct syntax in the docs, nor even whether this is possible at all. As a current workaround I've devised the following, but it is less than ideal -- from an elegance perspective at least, but it may also be a performance hit; I haven't looked into that yet.

        insert into data
        select arr[index], foo
        from template,
             (select array[3,7,5,2] as arr) as q
        where id = 1;

    If anyone could suggest a (better) alternative to accomplish this, I'd like to hear that as well.

    Read the article

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL 2008. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define what the question text is for the QuestionId and demographics for the RespondentId. Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export; it's working. The problem is that it seems slow, and we don't have that much data (fewer than 50k respondents).

    Right now I'm thinking, "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?" The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level: find out which respondents are being exported and then select their combined cached de-aggregated data.

    To me, the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is, we have multiple clients, so that doesn't scale, AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data FOR XML AUTO WHERE RespondentId = x. With that in place, I would then be able to get my export, with filtering, via XML calls into the XML column.

    What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem OK? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!
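
    For reference, a hedged sketch of the kind of pivot the question describes paying for on every export -- one row per respondent, one column per question (the table name and question IDs are invented; the real version would be generated dynamically from the question table):

        SELECT RespondentId,
               MAX(CASE WHEN QuestionId = 1 THEN Answer END) AS Question1,
               MAX(CASE WHEN QuestionId = 2 THEN Answer END) AS Question2
        FROM Answers
        GROUP BY RespondentId;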

    Read the article

  • Ways of breaking down SQL transactional/call data into reports -- 'square data'?

    - by RizwanK
    I've got a large database of call-traffic information (although the question could be answered with any generic data set). For instance, a row contains:

        call endpoint server (endpoint_name)
        call endpoint status (sip_disconnect_reason)
        call destination (destination)
        call completed (duration) [duration > 0 is completed]
        call account group (account_group)

    It's pretty easy to run SQL reports against the data, e.g.:

        select count(*), endpoint_name from calls where duration > 0 group by endpoint_name
        select count(*), destination from calls where blah group by destination

    I've been calling these filtering or breakdown reports (I get the number of calls per carrier, etc.). Add another breakdown and you've got two breakdowns, a la:

        select count(*), endpoint_name, sip_disconnect_reason
        from calls
        where duration = 0
        group by endpoint_name, sip_disconnect_reason

    Of course, if you keep adding breakdowns, you end up making super-large reports and slicing your data so thin that you can't extract any trends from it. So my question is this: is there a name for this sort of method of report writing? (I've heard words like squares, slicing, and breakdown reports applied to them.) I'm looking for a Python/reporting toolkit that I can use to make these easier to generate for my end users.

    Aside: are there other ways of representing transactional data that might be useful rather than the above method? Thanks,
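
    A hedged aside: this is essentially OLAP-style slicing along dimensions, and some engines can compute several breakdowns in one pass. A sketch using GROUPING SETS (available in SQL Server 2008 and later, among others), reusing the column names from the question:

        -- per-endpoint, per-reason, and combined breakdowns in a single query
        SELECT endpoint_name, sip_disconnect_reason, COUNT(*) AS calls
        FROM calls
        WHERE duration = 0
        GROUP BY GROUPING SETS (
            (endpoint_name),
            (sip_disconnect_reason),
            (endpoint_name, sip_disconnect_reason)
        );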

    Read the article

  • confusion about using types instead of gtts in oracle

    - by Omnipresent
    I am trying to convert queries like the one below to types, so that I won't have to use a GTT:

        insert into my_gtt_table_1
            (house, lname, fname, MI, fullname, dob)
        (select house, lname, fname, MI, fullname, dob
         from (select 'REG' house,
                      mbr_last_name lname,
                      mbr_first_name fname,
                      mbr_mi MI,
                      mbr_first_name || mbr_mi || mbr_last_name fullname,
                      mbr_dob dob
               from table_1 a, table_b
               where a.head = b.head
                 and mbr_number = '01'
                 and mbr_last_name = v_last_name) c

    The above is just a sample, but the complex queries are bigger than this, and it lives inside a stored procedure. So, to avoid the GTT (my_gtt_table_1), I did the following:

        create or replace type lname_row as object
        (
            house    varchar2(30),
            lname    varchar2(30),
            fname    varchar2(30),
            MI       char(1),
            fullname varchar2(63),
            dob      date
        )

        create or replace type lname_exact as table of lname_row

    Now, in the SP:

        type lname_exact is table of <what_table_should_i_put_here>%rowtype;
        tab_a_recs lname_exact;

    In the above I am not sure what table to put, as my query has nested subqueries. The query in the SP (I am trying this as a sample to see if it works):

        select lname_row('', '', '', '', '', '', sysdate)
        bulk collect into tab_a_recs
        from table_1;

    I am getting errors like:

        ORA-00913: too many values

    I am really confused and stuck with this :(
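
    A hedged sketch of the usual shape of this pattern (assuming the lname_row attributes above): the PL/SQL collection is declared against the SQL type itself rather than a table %rowtype, and the constructor must receive exactly one value per attribute -- six for lname_row, which is likely why the seven-value call above raises ORA-00913:

        declare
            tab_a_recs lname_exact;
        begin
            select lname_row(house, lname, fname, MI, fullname, dob)
              bulk collect into tab_a_recs
              from some_source_table;   -- hypothetical source with matching columns
        end;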

    Read the article
