Search Results

Search found 17593 results on 704 pages for 'wmi query'.


  • How do I do a table join on two fields in my second table?

    - by Cannonade
    I have two tables:

        Messages - amongst other things, has a to_id and a from_id field.
        People - has a corresponding person_id.

    I am trying to figure out how to do the following in a single LINQ query: give me all messages that have been sent to and from person x (idself). I had a couple of cracks at this.

    Not quite right:

        MsgPeople = (from p in db.people
                     join m in db.messages on p.person_id equals m.from_id
                     where (m.from_id == idself || m.to_id == idself)
                     orderby p.name descending
                     select p).Distinct();

    This almost works, except I think it misses one case: "people who have never received a message, just sent one to me".

    How this works in my head - what I really need is something like:

        join m in db.messages on (p.people_id equals m.from_id or p.people_id equals m.to_id)

    That gets me a subset of the people I am after, but it seems you can't do that. I have tried a few other options, like doing two joins:

        MsgPeople = (from p in db.people
                     join m in AllMessages on p.person_id equals m.from_id
                     join m2 in AllMessages on p.person_id equals m2.to_id
                     where (m2.from_id == idself || m.to_id == idself)
                     orderby p.name descending
                     select p).Distinct();

    but this gives me a subset of the results I need, I guess something to do with the order the joins are resolved. My understanding of LINQ (and perhaps even database theory) is embarrassingly superficial and I look forward to having some light shed on my problem.
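
    A hedged sketch (untested, using only the names from the question's own queries) of two ways to express the OR join condition that LINQ's join keyword can't: drop the join and filter over a cross join with a where clause, or use an exists-style Any() subquery. Both shapes translate to SQL in LINQ to SQL.

        // Cross join + where: matches the "join on A or B" intent directly.
        var msgPeople = (from p in db.people
                         from m in db.messages
                         where (m.from_id == p.person_id || m.to_id == p.person_id)
                            && (m.from_id == idself || m.to_id == idself)
                         orderby p.name descending
                         select p).Distinct();

        // Exists-style alternative: people who have exchanged a message with idself.
        var msgPeople2 = db.people
            .Where(p => db.messages.Any(m =>
                   (m.from_id == p.person_id && m.to_id == idself) ||
                   (m.to_id == p.person_id && m.from_id == idself)))
            .OrderByDescending(p => p.name);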

    Read the article

  • How to structure code with 2 methods, one after another, which throw the same two exceptions?

    - by dotnetdev
    Hi, I have two methods, called one straight after the other, which can both throw the exact same two exceptions (if an erroneous condition occurs - I'm not saying I'm currently getting exceptions). Should I write separate try/catch blocks, with the one statement in each try block, and catch both exceptions? Both are exceptions I can handle: I checked the MSDN class library reference and there is something I can do, e.g. re-open the SqlConnection, or run a plain query instead of a stored proc that does not exist. So code like this:

        try
        {
            obj.Open();
        }
        catch (SqlException)
        {
            // Take action here.
        }
        catch (InvalidOperationException)
        {
            // Take action here.
        }

    and likewise for the other method I call straight after. This seems like a very messy way of coding. The other option is to code with the exception variable (which I omit because I am using AOP to log the exception details via a class-level attribute); that could help me find out which method caused an exception and take action accordingly. Is this the best approach, or is there another best practice altogether? I also assume that, as only these two exceptions are thrown, I do not need to catch Exception, since that would be for an exception I cannot handle (causes way out of my control). Thanks
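
    One hedged way to avoid repeating the two catch blocks is to factor the guarded call into a small helper that takes a delegate. RunGuarded and HandleSqlProblem below are made-up names, not framework members, and the recovery logic is only sketched.

        private void RunGuarded(Action step)
        {
            try
            {
                step();
            }
            catch (SqlException ex)
            {
                HandleSqlProblem(ex);   // e.g. re-open the connection; the AOP attribute still logs
            }
            catch (InvalidOperationException ex)
            {
                HandleSqlProblem(ex);
            }
        }

        // Usage, one line per method call:
        RunGuarded(() => obj.Open());
        RunGuarded(() => RunSecondMethod());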

    Read the article

  • Facebook / Offline Permission - Trying to perform an action on a set of offline users.

    - by blueigloo
    Hi there, we're building an app which, as part of its functionality, tries to capture the number of likes associated with a particular video owned by a user. Users of the app are asked for extended offline access and we capture the key for each user. The format is like this:

        2.hg2QQuYeftuHx1R84J1oGg__.XXXX.1272394800-nnnnnn

    Each user has their offline / infinite key stored in a table in a DB. The object_id which we're interested in is also stored in the DB. At a later stage (offline) we try to run a batch job which reads the number of likes for each user's video (see attached code). For some reason, however, after the first iteration of the loop - which yields the likes correctly - we get a failure with the oh so familiar message: "Session key is invalid or no longer valid". Any insight would be most appreciated. Thanks, B

        List<DVideo> videoList = db.SelectVideos();
        foreach (DVideo video in videoList)
        {
            long userId = 0;
            ConnectSession fbSession = new ConnectSession(APPLICATION_KEY, SECRET_KEY);

            // session key is attached to the video object for now.
            fbSession.SessionKey = video.UserSessionKey;
            fbSession.SessionExpires = false;

            string fbuid = video.FBUID;
            long.TryParse(fbuid, out userId);
            if (userId > 0)
            {
                fbSession.UserId = userId;
                fbSession.Login();

                Api fbApi = new Facebook.Rest.Api(fbSession);
                string xmlQueryResult = fbApi.Fql.Query("SELECT user_id FROM like WHERE object_id = " + video.FBVID);

                XmlDocument xmlDoc = new XmlDocument();
                xmlDoc.Load(new StringReader(xmlQueryResult));
                int likesCount = xmlDoc.GetElementsByTagName("user_id").Count;

                // Write entry in VideoWallLikes
                if (likesCount > 0)
                {
                    db.CountWallLikes(video.ID, likesCount);
                }
                fbSession.Logout();
            }
            fbSession = null;
        }

    Read the article

  • PHP - advice for a Java HashMap alternative in PHP?

    - by teutara
    I know this is a super-noob question and will be answered in no time, but I could not figure it out - sorry for any inconvenience. Here is the thing: I have a DB with this kind of table, with more than 10M rows:

        ID   colA   colB    Length
        1    seq1   seq11   1
        2    seq1   seq11   11
        3    seq3   seq33   21
        4    seq3   seq33   14

    I want to loop through colA first, get the relevant colB value, and check if there are any other occurrences of the same value. For example, for colB (seq11) there are two occurrences of colA (seq1); in that case I have to combine those rows and output the sum of the lengths. Similar to this:

        ID   colA   colB    Length
        1    seq1   seq11   12
        2    seq3   seq33   35

    I am more of a Java guy, but because my colleague has written everything in PHP and this is just an addition, I need a PHP solution. In Java I would have used a HashMap, so that I would have each colA value once and just increment the value of the Length column. I know it is not a proper question, but thank you in advance.

    EDIT: I tried this query in order to group the occurrences:

        SELECT COUNT(*) SeqName FROM SeqTable GROUP BY SeqName HAVING COUNT(*) > 0;
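
    The closest PHP stand-in for a Java HashMap is an associative array; a rough sketch is below, assuming $result is a mysqli result for "SELECT colA, colB, Length FROM SeqTable". With 10M rows it is usually cheaper to let MySQL do the work instead, with SELECT colA, colB, SUM(Length) FROM SeqTable GROUP BY colA, colB.

        <?php
        // Associative array used like map.get(key) / map.put(key, value).
        $totals = array();
        while ($row = mysqli_fetch_assoc($result)) {
            $key = $row['colA'] . '|' . $row['colB'];   // composite key
            if (!isset($totals[$key])) {
                $totals[$key] = 0;
            }
            $totals[$key] += (int) $row['Length'];      // increment the running length
        }
        ?>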

    Read the article

  • Best way to not update empty posts

    - by user1533106
    Hello, I'm using CodeIgniter, and the page in question just updates info about a user. If the user goes to the page, edits some values, and some of the posted fields come back as "" or empty (same thing), then those fields should not be updated - the query should just skip them. I have some logic for this, but it is a bit ugly and will take a lot of time:

        $nome = "'nome' =>" . $this->input->post('nome') . "'";
        $sobrenome = "'sobrenome' =>" . $this->input->post('sobrenome') . "'";
        if ($nome != "") {
            $nome = "'nome' =>" . $this->input->post('nome') . "'";
        } else {
            $nome = "";
        }
        if ($sobrenome != "") {
            $sobrenome = "'sobrenome' =>" . $this->input->post('sobrenome') . "'";
        } else {
            $sobrenome = "";
        }
        $data = array($nome, $sobrenome);

    The problem is, I have a lot of fields :( If anyone knows a smarter or better way, please let me know.
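
    A hedged sketch of one tidier approach: keep the column names in an array, loop over it, and only add non-empty POST values to $data. The field list, table name, and key below are assumptions for illustration, not taken from the original code.

        <?php
        $fields = array('nome', 'sobrenome' /* , ...the remaining columns */);
        $data = array();
        foreach ($fields as $f) {
            $value = trim((string) $this->input->post($f));
            if ($value !== '') {
                $data[$f] = $value;   // only non-empty fields end up in the update
            }
        }
        if (!empty($data)) {
            $this->db->where('id', $user_id)->update('users', $data);   // assumed table/key names
        }
        ?>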

    Read the article

  • How to update MySQL table with a cumulative count of events with the same date

    - by John
    I have a table which I pull data from. This data has a date and a stock symbol. There can be multiple rows of data with the same date and different stock symbols. I need to update the table so that there is a running total for all rows with the same date. When a new date occurs the cumulative count re-sets to 1 and then resumes. A typical query:

        mysql> SELECT Sym 'sym', fdate as 'FilledDate', TsTradeNum 'numT'
            -> FROM trades_entered_b2
            -> WHERE fDate >= '2009-08-03' AND fDate <= '2009-08-07'
            -> LIMIT 10;
        +------+------------+------+
        | sym  | FilledDate | numT |
        +------+------------+------+
        | WAT  | 2009-08-03 |    0 |
        | ALGN | 2009-08-04 |    0 |
        | POT  | 2009-08-05 |    0 |
        | PTR  | 2009-08-06 |    0 |
        | SCHW | 2009-08-06 |    0 |
        | FDO  | 2009-08-07 |    0 |
        | NBL  | 2009-08-07 |    0 |
        | RRC  | 2009-08-07 |    0 |
        | WAT  | 2009-08-08 |    0 |
        | COCO | 2009-08-08 |    0 |
        +------+------------+------+

    What I want:

        +------+------------+------+
        | sym  | FilledDate | numT |
        +------+------------+------+
        | WAT  | 2009-08-03 |    1 |
        | ALGN | 2009-08-04 |    1 |
        | POT  | 2009-08-05 |    1 |
        | PTR  | 2009-08-06 |    1 |
        | SCHW | 2009-08-06 |    2 |
        | FDO  | 2009-08-07 |    3 |
        | NBL  | 2009-08-07 |    4 |
        | RRC  | 2009-08-07 |    5 |
        | WAT  | 2009-08-08 |    1 |
        | COCO | 2009-08-08 |    2 |
        +------+------------+------+

    What I need to do is update the TsTradeNum column with the correct values. I have tried to create a function and various queries; all failed. Any ideas? Thanks in advance
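
    On MySQL versions without window functions, one hedged way to get the per-date running count is with session variables over an ordered derived table; on MySQL 8.0+ the same thing is simply ROW_NUMBER() OVER (PARTITION BY fdate ORDER BY Sym). The result can then be joined back in an UPDATE to fill TsTradeNum. The sketch below only demonstrates the numbering and relies on left-to-right evaluation of the select list, so verify it on your server before trusting it.

        SELECT Sym,
               FilledDate,
               @num := IF(FilledDate = @prev, @num + 1, 1) AS numT,
               @prev := FilledDate AS prev_date_marker
        FROM (SELECT Sym, fdate AS FilledDate
              FROM trades_entered_b2
              WHERE fdate >= '2009-08-03' AND fdate <= '2009-08-07'
              ORDER BY fdate, Sym) AS ordered
        CROSS JOIN (SELECT @num := 0, @prev := NULL) AS vars;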

    Read the article

  • How to add "missing" columns in a column group in reporting services?

    - by Gimly
    Hello, I'm trying to create a report that displays, for each month of the year, the quantity of goods sold. I have a query that returns the list of goods sold for each month; it looks something like this:

        SELECT Seller.FirstName, Seller.LastName, SellingHistory.Month, SUM(SellingHistory.QuantitySold)
        FROM SellingHistory
        JOIN Seller ON SellingHistory.SellerId = Seller.SellerId
        WHERE SellingHistory.Year = @Year
        GROUP BY Seller.FirstName, Seller.LastName, SellingHistory.Month

    What I want to do is display a report that has a column for each month plus a total column, displaying for each seller the quantity sold in each month:

        Seller Name | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Total

    What I managed to do, using a matrix and a column group (grouped on Month), is display the columns for existing data: if I have data from January to March, it displays the first three columns and the total. What I would like is to always display all the columns. I thought about doing that by adding the missing months in the SQL request, but I find that a bit weird and I'm sure there must be some "cleaner" solution, as this must be quite a frequent need. Thanks. PS: I'm using SQL Server Express 2008.
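
    If the fix is done on the SQL side as suggested, one hedged sketch (assuming SellingHistory.Month is stored as a number from 1 to 12) is to build all twelve months with a table value constructor, which SQL Server 2008 supports, and LEFT JOIN the sales onto it so empty months still come back as rows for the matrix:

        SELECT Seller.FirstName,
               Seller.LastName,
               Months.MonthNo,
               SUM(COALESCE(SellingHistory.QuantitySold, 0)) AS QuantitySold
        FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12)) AS Months(MonthNo)
        CROSS JOIN Seller
        LEFT JOIN SellingHistory
               ON  SellingHistory.SellerId = Seller.SellerId
               AND SellingHistory.Month    = Months.MonthNo
               AND SellingHistory.Year     = @Year
        GROUP BY Seller.FirstName, Seller.LastName, Months.MonthNo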

    Read the article

  • PHP & MySQL username validation and storage problem.

    - by php
    For some reason, when a user enters a brand new username, the error message <p>Username unavailable</p> is displayed and the name is not stored. I was wondering if someone can help me find the flaw in my code so I can fix this error? Thanks. Here is the PHP code:

        if ($_POST['username'] && trim($_POST['username']) !== '') {
            $u = "SELECT * FROM users WHERE username = '$username' AND user_id <> '$user_id'";
            $r = mysqli_query($mysqli, $u)
                 or trigger_error("Query: $u\n<br />MySQL Error: " . mysqli_error($mysqli));

            if (mysqli_num_rows($r) == TRUE) {
                echo '<p>Username unavailable</p>';
                $_POST['username'] = NULL;
            } else if (isset($_POST['username']) && mysqli_num_rows($r) == 0 && strlen($_POST['username']) <= 255) {
                $username = mysqli_real_escape_string($mysqli, $_POST['username']);
            } else if ($_POST['username'] && strlen($_POST['username']) >= 256) {
                echo '<p>Username can not exceed 255 characters</p>';
            }
        }
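
    One thing that stands out is that the SELECT interpolates $username before that variable has been assigned from $_POST in this block (it is only set in the later else-if), so the availability check may not be testing the name the user actually typed. A hedged reordering sketch, keeping the same tables and variables:

        <?php
        $posted = trim($_POST['username']);
        if ($posted !== '' && strlen($posted) <= 255) {
            $username = mysqli_real_escape_string($mysqli, $posted);
            $u = "SELECT user_id FROM users WHERE username = '$username' AND user_id <> '$user_id'";
            $r = mysqli_query($mysqli, $u)
                 or trigger_error("Query: $u\n<br />MySQL Error: " . mysqli_error($mysqli));
            if (mysqli_num_rows($r) > 0) {
                echo '<p>Username unavailable</p>';
            }
        } elseif (strlen($posted) > 255) {
            echo '<p>Username can not exceed 255 characters</p>';
        }
        ?>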

    Read the article

  • Database solution for 200million writes/day, monthly summarization queries

    - by sb
    Hello. I'm looking for help deciding on which database system to use. (I've been googling and reading for the past few hours; it now seems worthwhile to ask for help from someone with firsthand knowledge.) I need to log around 200 million rows (or more) per 8 hour workday to a database, then perform weekly/monthly/yearly summary queries on that data. The summary queries would be for collecting data for things like billing statements, eg. "How many transactions of type A did each user run this month?" (could be more complex, but that's the general idea). I can spread the database amongst several machines, as necessary, but I don't think I can take old data offline. I'll definitely need to be able to query a month's worth of data, maybe a year. These queries would be for my own use, and wouldn't need to be generated in real-time for an end-user (they could run overnight, if needed). Does anyone have any suggestions as to which databases would be a good fit? P.S. Cassandra looks like it would have no problem handling the writes, but what about the huge monthly table scans? Is anyone familiar with Cassandra/Hadoop MapReduce performance?

    Read the article

  • Display a web page from another site in an ASP.NET page.

    - by Daniel
    Hi all, our customer has a requirement to extend the functionality of their existing large government project. It is an ASP.NET 3.5 project (recently upgraded from 2.0). The existing solution is quite a behemoth that is almost unmaintainable, so they have decided they want to provide the new functionality by hosting it on another website that is shown within the existing website. I'm not quite sure how this is best done, or whether there are any security issues preventing it or that need to be considered. Essentially the user would log on to the existing web site as normal, and when clicking on a certain link the page would load as normal with some kind of frame or control that contains the page from the other site. That is, they do not want to simply redirect to the other site; they want to show it embedded within the current one so that the existing menus etc. are still available. I believe that if information needed to be passed to the embedded page it would be done using query strings, as I'm not sure there is even another way to accomplish this. Can anyone give me some pointers on where to start looking to implement this, or any potential pitfalls I should be aware of? Thanks

    Read the article

  • Best ways to format LINQ queries.

    - by Aren B
    Before you ignore / vote-to-close this question, I consider this a valid question to ask because code clarity is an important topic of discussion, it's essential to writing maintainable code, and I would greatly appreciate answers from those who have come across this before. I've recently run into this problem: LINQ queries can get pretty nasty real quick because of the large amount of nesting. Below are some examples of the differences in formatting that I've come up with (for the same relatively non-complex query).

    No formatting:

        var allInventory = system.InventorySources.Select(src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true), Region = src.Value.Region }).GroupBy(i => i.Region, i => i.Inventory);

    Elevated formatting:

        var allInventory = system.InventorySources
            .Select(src => new
            {
                Inventory = src.Value.GetInventory(product.OriginalProductId, true),
                Region = src.Value.Region
            })
            .GroupBy(
                i => i.Region,
                i => i.Inventory);

    Block formatting:

        var allInventory = system.InventorySources
            .Select(
                src => new
                {
                    Inventory = src.Value.GetInventory(product.OriginalProductId, true),
                    Region = src.Value.Region
                })
            .GroupBy(
                i => i.Region,
                i => i.Inventory
            );

    List formatting:

        var allInventory = system.InventorySources
            .Select(src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true),
                                 Region = src.Value.Region })
            .GroupBy(i => i.Region,
                     i => i.Inventory);

    I want to come up with a standard for LINQ formatting that maximizes readability and understanding and looks clean and professional. So far I can't decide, so I turn the question to the professionals here.

    Read the article

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It writes a lot to disk because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "insert ... on duplicate key update" and found some possibilities for single-row updates, but I don't know how to apply it in the context of an INSERT INTO ... SELECT query.

        CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$
        DECLARE
            member_content_type_id INTEGER;
        BEGIN
            member_content_type_id := (SELECT id FROM django_content_type
                                       WHERE app_label='web' AND model='member');

            DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id;

            INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id, object_id_int,
                                            title, description, content, url, meta_encoded)
            SELECT 'default',
                   member_content_type_id,
                   web_member.id,
                   web_member.id,
                   web_member.name,
                   '',
                   web_user.email||' '||web_member.normalized_name||' '||web_country.name,
                   '',
                   '{}'
            FROM web_member
            INNER JOIN web_user ON (web_member.user_id = web_user.id)
            INNER JOIN web_country ON (web_member.country_id = web_country.id)
            WHERE web_user.is_active=TRUE;
        END;
        $$ LANGUAGE plpgsql;

    EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi. (content_type_id, object_id_int) is a unique pair in watson_searchentry, but at the moment the index is not present (there is no use for it). This script should be run at most once a day for full rebuilds of the search index.
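
    A hedged sketch of the usual pre-9.5 PostgreSQL pattern (no INSERT ... ON CONFLICT yet): update the rows that already exist, then insert only the members that have no search entry, using (content_type_id, object_id_int) as the logical key. These statements would replace the DELETE + INSERT inside the same function and are untested against the pastebin schema.

        UPDATE watson_searchentry s
        SET    title   = m.name,
               content = u.email || ' ' || m.normalized_name || ' ' || c.name
        FROM   web_member m
        JOIN   web_user    u ON m.user_id    = u.id
        JOIN   web_country c ON m.country_id = c.id
        WHERE  u.is_active
        AND    s.content_type_id = member_content_type_id
        AND    s.object_id_int   = m.id;

        INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id, object_id_int,
                                        title, description, content, url, meta_encoded)
        SELECT 'default', member_content_type_id, m.id, m.id, m.name, '',
               u.email || ' ' || m.normalized_name || ' ' || c.name, '', '{}'
        FROM   web_member m
        JOIN   web_user    u ON m.user_id    = u.id
        JOIN   web_country c ON m.country_id = c.id
        WHERE  u.is_active
        AND    NOT EXISTS (SELECT 1 FROM watson_searchentry s
                           WHERE s.content_type_id = member_content_type_id
                           AND   s.object_id_int   = m.id);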

    Read the article

  • To NOLOCK or NOT to NOLOCK, that is the question

    - by Limey
    Hi all, this is really more of a discussion than a specific question about NOLOCK. I took over an app recently where almost every query (and there are lots of them) has the NOLOCK option on it. Now I am pretty new to SQL Server (I used Oracle for 10 years), but I find this pretty disturbing. This weekend I was talking with one of my friends who runs a rather large e-commerce site (name withheld to protect the guilty), and he says he has to do this with all of his SQL Servers because he otherwise always ends up in deadlocks. Is this just a huge shortfall of SQL Server? Is it a failure in the DB design (mine is not third normal form, but it's close)? Is anybody out there running a SQL Server app without NOLOCKs? These are issues that Oracle handles better with more granular record locks. Is SQL Server just not able to handle big loads? Is there some better workaround than reading uncommitted data? I would love to hear what people think. Thanks
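
    One commonly cited alternative to sprinkling NOLOCK everywhere is row versioning, which behaves much closer to Oracle's readers-don't-block-writers model: readers see the last committed version of a row instead of blocking or reading uncommitted data. A hedged sketch (the database name is a placeholder, and the statement needs to run while no other connections are active in that database, so test it first):

        ALTER DATABASE MyShopDb SET READ_COMMITTED_SNAPSHOT ON;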

    Read the article

  • [iphone] method created in a separate class returns "out of scope"

    - by Dror Sabbag
    Hey, I have created a class (subclass of NSObject) which will hold all my SQL / db connection code. In a separate view controller I have instantiated the SQL class and performed some actions; all went through OK. But one of the methods in the SQL class is defined as follows (abridged - the query and loop code is omitted from the snippet):

        - (NSString *)queryTable:(NSUInteger *)fieldnum
                // query from db, and assign the field value into "fieldName"
                dbEntity = fieldName;
                [fieldName release];
            }
            sqlite3_finalize(statement);
        }
        return dbEntity;
        }

    dbEntity is defined as an NSString, and I have set it as a nonatomic, retain property:

        @property (nonatomic, retain) NSString *dbEntity;

    Whenever I call this method from my view controller and debug step by step, I see that the method is running and querying the db as expected, but when it passes the value into dbEntity, the values in dbEntity are suddenly "out of scope". That is, if I step over this specific line:

        dbEntity = fieldName;

    I can see values inside fieldName, but "out of scope" for dbEntity. Why is that? What is wrong with the dbEntity definition? Any help will be appreciated.

    Read the article

  • Django: getting the list of related records for a list of objects

    - by Silver Light
    Hello! I have two models related one-to-many:

        class Person(models.Model):
            name = models.CharField(max_length=255)
            surname = models.CharField(max_length=255)
            age = models.IntegerField()

        class Dog(models.Model):
            name = models.CharField(max_length=255)
            owner = models.ForeignKey('Person')

    I want to output a list of persons and, below each person, a list of the dogs he has. Here's how I can do it. In the view:

        persons = Person.objects.all()[0:100]

    In the template:

        {% for p in persons %}
            {{ p.name }} has dogs:<br />
            {% for d in p.dog_set.all %}
                - {{ d.name }}<br />
            {% endfor %}
        {% endfor %}

    But if I do it like that, Django will execute 101 SQL queries, which is very inefficient. I tried to make a custom manager which gets all the persons, then all the dogs, and links them in Python, but then I can't use the paginator (my other question: http://stackoverflow.com/questions/2532475/django-paginator-raw-sql-query) and it looks quite ugly. Is there a more graceful way of doing this?
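
    A hedged sketch of the two-query approach (recent Django versions do this automatically with prefetch_related, but it can be done by hand and still works with slicing for pagination):

        # Two queries instead of 101: fetch the page of persons, then all their dogs at once,
        # and group the dogs by owner in Python.
        persons = list(Person.objects.all()[0:100])
        dogs_by_owner = {}
        for dog in Dog.objects.filter(owner__in=[p.pk for p in persons]):
            dogs_by_owner.setdefault(dog.owner_id, []).append(dog)
        for p in persons:
            p.dog_list = dogs_by_owner.get(p.pk, [])
        # In the template, loop over p.dog_list instead of p.dog_set.all.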

    Read the article

  • A very interesting MySQL problem (related to indexing, millions of records, algorithms)

    - by terence410
    This problem is pretty hard to describe and therefore difficult to search an answer for. I hope some expert can share their opinions on it. I have a table with around 1 million records. The table structure is similar to something like this:

        items {
            uid     (primary key, bigint, 15)
            updated (indexed, int, 11)
            enabled (indexed, tinyint, 1)
        }

    The scenario is this: I have to select all of the records every day and do some processing. It takes around 3 seconds to handle each item. I have written a PHP script to fetch 200 items each time using the following:

        SELECT * FROM items
        WHERE updated < UNIX_TIMESTAMP(NOW()) - 86400 AND enabled = 1
        LIMIT 200;

    I then update the "updated" field of the selected items to make sure they won't be selected again within one day. The update query is something like this:

        UPDATE items SET updated = UNIX_TIMESTAMP(NOW()) WHERE uid IN (1,2,3,4,...);

    Then the PHP continues to run and process the data, which doesn't require any MySQL connection any more. Since I have a million records and each record takes 3 seconds to process, it's definitely impossible to do it sequentially, so I execute the PHP every 10 seconds. However, as time goes by and the table grows, the SELECT gets much slower. Sometimes it takes more than 100 seconds to run! Do you guys have any suggestions for how I might solve this problem?
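
    The WHERE clause filters on an equality (enabled) plus a range on updated, and the two separate single-column indexes generally don't serve that combination as well as one composite index does; selecting only the columns the batch actually needs also helps. A hedged first step:

        -- Let the batch query walk one index instead of scanning a growing table.
        ALTER TABLE items ADD INDEX idx_enabled_updated (enabled, updated);

        SELECT uid
        FROM items
        WHERE enabled = 1
          AND updated < UNIX_TIMESTAMP(NOW()) - 86400
        LIMIT 200;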

    Read the article

  • mysql - multiple where and search

    - by Shamil
    I'm trying to write a SQL query that satisfies multiple criteria. Of these, most are connected via a column, so joins are possible, however, some queries are such that I'd have to search additional tables for the information. What would be the least expensive and best way to do this? Let's say that we have a few tables. One table contains information such as sales information for a server: the salesperson, client id, service lease term, timestamps etc. It is possible that a client has multiple sales but with a different "service". I'd need to pick up all of the different ones. Another table has the quotes for the services, I'd need to pick some information out about this, whilst another, which could be joined to this one has some more information. Those tables are linked by a common client ID, so joins are possible, but I'd also need to search the first table for multiple instances of the client ID. Of course, I'd want to restrict the search to certain timestamps, which I can easily do as the timestamps are stored in MySQL format.

    Read the article

  • Why doesn't Firefox redownload images already on a page?

    - by vvo
    Hello, I just read this article: https://developer.mozilla.org/en/HTTP_Caching_FAQ

    There's a Firefox behavior (and that of some other browsers, I guess) I'd like to understand: if I take any webpage and try to insert the same image multiple times in JavaScript, the image is only downloaded ONCE, even if I specify all the headers needed to say "do not ever use the cache" (see the article). I know there are workarounds (like adding query strings to the end of URLs, etc.), but why does Firefox act like that? If I say that an image must not be cached, why is the image still taken from the cache when I try to re-insert it? Also, which cache is used for this? (I guess it's the memory cache.) Is the behavior the same for dynamic script inclusion, for example? THE ANSWER IS NO :) I just tested it, and the same headers on a JS script will make Firefox redownload it each time you append the script to the DOM.

    PS: I know you're wondering WHY I need to do this (appending the same image multiple times and forcing a redownload), but this is the way our app works. Thank you.

    The answer turned out to be: Firefox will store images for the current page load in the memory cache even if you specify that it doesn't have to cache them. You can't change this behavior, which is odd because it's not the same for JavaScript files, for example. Could someone explain, or link to a document describing, how the Firefox cache works?

    Read the article

  • How to convert a time whose zero point is the year 0000 (maybe) to java.util.Date

    - by hguser
    I have a GPS time in the database, and when I do some queries I have to use java.util.Date; however, I found that I do not know how to convert the GPS time to java.util.Date. Here is an example:

        Readable time          GPS time
        2010-11-15 13:10:00    634254192000000000
        2010-11-15 14:10:00    634254228000000000

    The difference between the two values is 36000000000; obviously it stands for one hour, so the unit of the GPS time in the db must be 100-nanosecond units: 1 hour = 3600 seconds = 3600*1000 milliseconds = 3600*1000*10000 of these units. Then I tried to convert the time. Take 634254228000000000 as an example (it stands for "2010-11-15 14:10:00"):

        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ssZ");
        Date d = new Date(63425422800000L);
        System.out.println(sdf.format(d));

    The result is 3979-11-15 13:00:00+0000. Of course it is wrong. Then I tried to calculate:

        63425422800000 / 3600000 / 24 / 365 = 2011.xxx

    So it seems that the GPS time here is not counted from the epoch (1970-01-01 00:00:00+0000). It may be something like 0001-01-01 00:00:00+0000. Then I tried the following:

        Date date_0 = sdf.parse("0001-01-01 00:00:00+0000");
        Date d = new Date(63425422800000L);
        System.out.println(sdf.format(d.getTime() + date_0.getTime()));

    The result is 2010-11-13 13:00:00+0000. :( Now I am confused about how to convert this GPS time. Any suggestions?
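
    The stored values look like .NET DateTime ticks: 100-nanosecond units counted from 0001-01-01T00:00:00, where the tick count at the Unix epoch is 621355968000000000. A hedged Java sketch of that conversion is below; it yields 2010-11-15 13:00:00 UTC for the second sample value, so any remaining fixed difference from the "readable" column (timezone, GPS-vs-UTC offset) would still need to be accounted for separately.

        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.TimeZone;

        public class TicksToDate {
            // Ticks (100 ns units) from 0001-01-01T00:00:00 to 1970-01-01T00:00:00 in .NET.
            private static final long TICKS_AT_UNIX_EPOCH = 621355968000000000L;

            public static Date fromTicks(long ticks) {
                long millis = (ticks - TICKS_AT_UNIX_EPOCH) / 10000L;  // 10,000 ticks per millisecond
                return new Date(millis);
            }

            public static void main(String[] args) {
                SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ssZ");
                sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
                System.out.println(sdf.format(fromTicks(634254228000000000L)));
                // prints 2010-11-15 13:00:00+0000
            }
        }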

    Read the article

  • WordPress workflow modifications

    - by blgnklc
    Hi all WordPress lovers, I would like to ask for help with Zensor, a plugin where you publish a post and a moderator then approves it for publication on the WordPress blog. When a post is awaiting approval, it appears as "waiting moderation", but I don't want any link to appear before moderator approval. I found the join clauses below:

        1. Must be added to the end of the JOIN part of any query:
               LEFT JOIN wp_zensor ON ID = wp_zensor.post_id
        2. Must be added to the end of the WHERE condition:
               AND wp_zensor.moderation_status = 'approved'

    Could you please show me where I should add these modifications in the category listing code below?

        <h2>Politics</h2>
        <?php $recent = new WP_Query("cat=31&showposts=1"); while($recent->have_posts()) : $recent->the_post(); ?>
            <b><a href="<?php the_permalink() ?>" rel="bookmark"><?php the_title(); ?></a></b>
            <?php the_content_limit(140, "devami &raquo;"); ?>
            <div class="hppostmeta">
                <p><?php the_time('j F Y, H:i'); ?> | <?php the_author_posts_link(); ?></p>
            </div>
        <?php endwhile; ?>

    Any general solutions will also be welcome. Thanks. BK
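
    Rather than editing each template query by hand, WordPress lets you append to the JOIN and WHERE parts of WP_Query SQL through the posts_join and posts_where filters, so the category loop above stays untouched. A hedged sketch (table and column names taken from the question; limiting the filters to front-end, non-admin queries is left out):

        <?php
        function zensor_approved_join($join) {
            global $wpdb;
            $join .= " LEFT JOIN {$wpdb->prefix}zensor ON {$wpdb->posts}.ID = {$wpdb->prefix}zensor.post_id ";
            return $join;
        }
        add_filter('posts_join', 'zensor_approved_join');

        function zensor_approved_where($where) {
            global $wpdb;
            $where .= " AND {$wpdb->prefix}zensor.moderation_status = 'approved' ";
            return $where;
        }
        add_filter('posts_where', 'zensor_approved_where');
        ?>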

    Read the article

  • Creation of model in core data on the fly

    - by user1740045
    How can we create a model in Core Data on the fly, i.e. getting the schema of the database from somewhere and then creating a Core Data object graph?

    Question: Yes, that's fine, and I agree with all the advantages. But can anybody tell me, practically, what the benefit is of integrating Core Data into a project instead of using SQL directly?

    1. No need to write SQL boilerplate code [but you need to learn the Core Data model (steep curve)].
    2. We can undo and redo changes [but practically, who needs it?].
    3. We can migrate to another schema [that can be done with SQLite as well; you just need to add another field to the table].
    4. For, say, aggregation on some field in a table: in Core Data we need to loop through Core Data objects, whereas in SQLite we first write the SQLite boilerplate code and then the basic aggregation SQL query, which is easy to write - only the length of the code increases. But in the case of Core Data there is a lot to learn.

    So apart from reducing the length of code, does it actually add value to the project, in terms of memory efficiency, performance, etc.?

    PS: If anybody has actually worked with Core Data (model creation on the fly), please share and give pointers. Thanks!

    Read the article

  • MS Access Form - Horizontal Anchor Affecting Data Update

    - by nicholas
    Running Access 2007 with a databound form. The form Record Source is set to a query, and all fields in the form have a defined Control Source; nothing fancy, just field names. The form is a Single Form with record navigation buttons which perform "Next Record" and "Previous Record" actions. As I navigate the records, the controls in the header update correctly. However, if I change a control's Horizontal Anchor property to "Right", the fields no longer update on record navigation. This is observed for both text box and combo box controls. I can switch the anchoring back to "Left" and the updating works as it should. Is there some reason anchoring would affect a control updating in an Access form? Or is this a bug that has been observed before? The only workaround I can think of is to assign the control text/value property in the form's OnCurrent event, but this seems somewhat sloppy. Am I missing something here?

    Read the article

  • IEnumerator seems to be affecting all objects, and not one at a time

    - by PFranchise
    Hey, I am trying to alter an attribute of an object, setting it to the value of that same attribute stored in another table. There is a one-to-many relationship between the two: the product end is the one and the versions are the many. Right now, both of the methods I have tried set all the returned products equal to the final version object, so in this case they are all the same. I am not sure where the issue lies. Here are my two code snippets; both yield the same result.

        int x = 1;
        IEnumerator<Product> ie = productQuery.GetEnumerator();
        while (ie.MoveNext())
        {
            ie.Current.RSTATE = ie.Current.Versions.First(o => o.VersionNumber == x).RSTATE;
            x++;
        }

    and

        foreach (var product in productQuery)
        {
            product.RSTATE = product.Versions.Single(o => o.VersionNumber == x).RSTATE;
            x++;
        }

    The versions table holds information for previous products, each distinguished by the version number. I know that it will start at 1 and go until it reaches the current version, based on my query returning the proper number of products. Thanks for any advice.

    Read the article

  • Distinct() to return List<> returning Duplicates

    - by KDM
    I have a list of Filters that are passed into a web service. I iterate over the collection, run a LINQ query for each filter, and add the results to a list of products, but when I do a GroupBy and Distinct() it doesn't remove the duplicates. I am using an IEnumerable because when you use Distinct it converts the result to IEnumerable. If you know how to construct this better and make my function return a type of List<Product>, that would be appreciated, thanks. Here is my code in C#:

        if (Tab == "All-Items")
        {
            List<Product> temp = new List<Product>();
            List<Product> Products2 = new List<Product>();
            foreach (Filter filter in Filters)
            {
                List<Product> products = (from p in db.Products
                                          where p.Discontinued == false && p.DepartmentId == qDepartment.Id
                                          join f in db.Filters on p.Id equals f.ProductId
                                          join x in db.ProductImages on p.Id equals x.ProductId
                                          where x.Dimension == "180X180" && f.Name == filter.Name /*Filter*/
                                          select new Product
                                          {
                                              Id = p.Id,
                                              Title = p.Title,
                                              ShortDescription = p.ShortDescription,
                                              Brand = p.Brand,
                                              Model = p.Model,
                                              Image = x.Path,
                                              FriendlyUrl = p.FriendlyUrl,
                                              SellPrice = p.SellPrice,
                                              DiscountPercentage = p.DiscountPercentage,
                                              Votes = p.Votes,
                                              TotalRating = p.TotalRating
                                          }).ToList<Product>();

                foreach (Product p in products)
                {
                    temp.Add(p);
                }

                IEnumerable temp2 = temp.GroupBy(x => x.Id).Distinct();
                IEnumerator e = temp.GetEnumerator();
                while (e.MoveNext())
                {
                    Product c = e.Current as Product;
                    Products2.Add(c);
                }
            }
            pf.Products = Products2; // return type must be List<Product>
        }
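
    GroupBy(x => x.Id) produces one group object per Id, and calling Distinct() on those groups doesn't collapse the underlying Product rows, which is why Products2 still contains duplicates. A hedged rewrite is to take one product per group, and to do it once after the foreach over Filters has finished rather than inside the loop:

        // One Product per Id, as a List<Product>.
        List<Product> products2 = temp
            .GroupBy(p => p.Id)
            .Select(g => g.First())
            .ToList();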

    Read the article

  • SQL Server Connection Timeout C#

    - by Termin8tor
    First off, I'd like to let everyone know I have searched for my particular problem and can't seem to find what's causing it. I have a SQL Server 2008 instance running on a network machine and a client I have written connecting to it. To connect, I have a small segment of code that establishes a connection to the SQL Server 2008 instance and returns a DataTable populated with the results of whatever query I run against the server - all pretty standard stuff really. The issue is, whenever I open my program and call this method, the first call takes about 15 seconds and then times out, regardless of what I've set the Connection Timeout value to in the connection string. Bizarrely, though, the second or third call to the method works without a problem. I have opened up the ports for SQL Server on the server machine as outlined in this article: How to Open firewall ports for SQL Server, and verified that it is correctly configured. Can anyone see a particular problem in my code?

        string _connectionString = "Server=" + @Properties.Settings.Default.sqlServer +
            "; Initial Catalog=" + @Properties.Settings.Default.sqlInitialCatalog +
            ";User Id=" + @Properties.Settings.Default.sqlUsername +
            ";Password=" + @Properties.Settings.Default.sqlPassword +
            "; Connection Timeout=1";

        private DataTable ExecuteSqlStatement(string command)
        {
            using (SqlConnection conn = new SqlConnection(_connectionString))
            {
                try
                {
                    conn.Open();
                    using (SqlDataAdapter adaptor = new SqlDataAdapter(command, conn))
                    {
                        DataTable table = new DataTable();
                        adaptor.Fill(table);
                        return table;
                    }
                }
                catch (SqlException e)
                {
                    throw e;
                }
            }
        }

    The SqlException that is caught is: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." It occurs at the conn.Open() line in the code snippet I have included. If anyone has any ideas, that'd be great!

    Read the article
