Search Results

Search found 25005 results on 1001 pages for 'sequential number'.

Page 333 of 1001

  • problem with binarysearch algorithm

    - by arash
    Hi friends, the code below is my binary search: the user enters comma-separated numbers in textBox1 and the number to search for in textBox2. My problem is that when I enter, for example, 15,21 in textBox1 and 15 in textBox2 and put a breakpoint on the line commented below, the value from textBox2 never ends up in searchnums (see the comment); it stays 0. Thanks in advance.

        public void button1_Click(object sender, EventArgs e)
        {
            // the problem is here: the value in textBox2 doesn't end up in searchnums, it stays 0
            int searchnums = Convert.ToInt32(textBox2.Text);
            int result = binarysearch(searchnums);
            MessageBox.Show(result.ToString());
        }

        public int binarysearch(int searchnum)
        {
            string[] source = textBox1.Text.Split(',');
            int[] nums = new int[source.Length];
            for (int i = 0; i < source.Length; i++)
            {
                nums[i] = Convert.ToInt32(source[i]);
            }
            int first = 0;
            int last = nums.Length;
            int mid = (int)Math.Floor(nums.Length / 2.0);
            while (1 <= nums.Length)
            {
                if (searchnum < nums[mid]) { last = mid - 1; }
                if (searchnum > nums[mid]) { first = mid + 1; }
                else { return nums[mid]; }
            }
            return -1;
        }
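
    A minimal sketch of a corrected search loop, assuming the numbers parsed from textBox1 are sorted in ascending order. In the posted code mid is computed once and never updated, the condition 1 <= nums.Length is always true for a non-empty array, and because the two ifs are not chained, a value smaller than nums[mid] falls through to the final else and returns the wrong element. Recomputing mid from first and last on every pass and stopping when the range empties gives the conventional form (method name and parameters here are illustrative):

        // Sketch only: assumes nums is parsed from textBox1 and already sorted ascending.
        public int BinarySearch(int[] nums, int searchnum)
        {
            int first = 0;
            int last = nums.Length - 1;                // last valid index, not Length
            while (first <= last)
            {
                int mid = first + (last - first) / 2;  // recomputed every iteration
                if (searchnum < nums[mid])
                    last = mid - 1;
                else if (searchnum > nums[mid])        // else-if, so the branches don't overlap
                    first = mid + 1;
                else
                    return nums[mid];                  // found
            }
            return -1;                                 // not found
        }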

    Read the article

  • Simple encryption - Sum of Hashes in C

    - by Dogbert
    I am attempting to demonstrate a simple proof of concept for a vulnerability in a piece of code in a game written in C. Let's say that we want to validate a character login. The login is handled by the user choosing n items (assume n = 5 for now) from a graphical menu. The items are all medieval themed:

         _______________________________
        |           |           |       |
        |    Bow    |   Sword   | Staff |
        |-----------|-----------|-------|
        |   Shield  |   Potion  |  Gold |
        |___________|___________|_______|

    The user must click on each item, then choose a number for each item. The validation algorithm then does the following:

        1. Determines which items were selected.
        2. Drops each string to lowercase (ie: Bow becomes bow, etc).
        3. Calculates a simple string hash for each string (ie: bow: b=2, o=15, w=23, sum = 2+15+23 = 40).
        4. Multiplies the hash by the value the user selected for the corresponding item; this new value is called the key.
        5. Sums together the keys for each of the selected items; this is the final validation hash.

    IMPORTANT: The validator will accept this hash, along with non-zero multiples of it (ie: if the final hash equals 1111, then 2222, 3333, 8888, etc are also valid). So, for example, let's say I select Bow (1), Sword (2), Staff (10), Shield (1), Potion (6). The algorithm drops each of these strings to lowercase, calculates their string hashes, multiplies each hash by the number selected for that string, then sums these keys together:

        Final_Validation_Hash = 1*HASH(Bow) + 2*HASH(Sword) + 10*HASH(Staff) + 1*HASH(Shield) + 6*HASH(Potion)

    By application of Euler's Method, I plan to demonstrate that these hashes are not unique, and want to devise a simple application to prove it. In my case, for 5 items, I would essentially be trying to solve:

        (B)(y) = (A_1)(x_1) + (A_2)(x_2) + (A_3)(x_3) + (A_4)(x_4) + (A_5)(x_5)

    where B is arbitrary, A_j are the selected coefficients/values for each string/category, x_j are the hash values for each string/category, y is the final validation hash (eg: 1111 above), and B, y, A_j, x_j are all discrete-valued, positive, and non-zero (ie: natural numbers). Can someone either assist me in solving this problem or point me to a similar example (ie: code, worked-out equations, etc)? I just need to solve the final step (ie: (B)(y) = ...). Thank you all in advance.
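
    Since the validator also accepts any non-zero multiple of the final hash, a collision can be exhibited by brute force rather than by solving the equation analytically. A hedged C# sketch of that search (the hash follows the letter-position scheme described above; the coefficient bound of 20 is an arbitrary choice for the demo):

        // Sketch only: hash values follow the letter-position scheme from the question;
        // the coefficient range 1..20 is an assumption made just for this demo.
        using System;

        class CollisionDemo
        {
            static int Hash(string s)
            {
                int sum = 0;
                foreach (char c in s.ToLower()) sum += c - 'a' + 1;   // a=1, b=2, ...
                return sum;
            }

            static void Main()
            {
                int[] x = { Hash("bow"), Hash("sword"), Hash("staff"), Hash("shield"), Hash("potion") };
                int[] chosen = { 1, 2, 10, 1, 6 };                     // the example selection
                int y = 0;
                for (int j = 0; j < 5; j++) y += chosen[j] * x[j];     // the final validation hash

                // Look for a different coefficient vector whose weighted sum is y or a multiple of y.
                for (int a1 = 1; a1 <= 20; a1++)
                for (int a2 = 1; a2 <= 20; a2++)
                for (int a3 = 1; a3 <= 20; a3++)
                for (int a4 = 1; a4 <= 20; a4++)
                for (int a5 = 1; a5 <= 20; a5++)
                {
                    int sum = a1*x[0] + a2*x[1] + a3*x[2] + a4*x[3] + a5*x[4];
                    bool same = a1 == chosen[0] && a2 == chosen[1] && a3 == chosen[2]
                                && a4 == chosen[3] && a5 == chosen[4];
                    if (!same && sum % y == 0)
                        Console.WriteLine($"collision: ({a1},{a2},{a3},{a4},{a5}) gives {sum} = {sum / y} * {y}");
                }
            }
        }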

    Read the article

  • Newsletter send using AJAX to avoid PHP timeout

    - by simPod
    I need to send newsletters. I already have a PHP script that sends mass emails, but as the email database grows it will stop working because of PHP's maximum script run time. To avoid that, I came up with a solution: I would call my PHP script using AJAX from JavaScript and pass it a $_GET parameter with a count of 20, so the script would send only 20 emails per call. Then the AJAX call would receive the success response and call my script again and again until all emails are sent. Is this possible? I'm asking because I have never seen such a solution, so I'm wondering whether it is realistic (it's kind of hard to implement in my PHP framework, so I'm asking the experts here first). To sum it up, here's a code skeleton:

        <script>
        var emailCount = 1000; // would get this from DB
        var runCount = 20;     // number of emails sent in one cycle
        var from = 0;          // start number
        function sendMail(){
            if(from < emailCount){
                jQuery.ajaxfunction({
                    path: 'script.php?from='+from+'&count='+runCount
                    successFc: function(){
                        from += runCount;
                        sendMail();
                    }
                })
            }
        }
        sendMail();
        </script>

    So, are there any obstacles? Thanks a lot.

    Read the article

  • Can anyone give me tips how to solve this using Graphs in C or Java?

    - by peiska
    Can anyone give me tips on how to solve this using graphs, in C or Java? I have a rectangular sector that I have to escape, and I have energy that drains with every step I take in the area. I have to give the single best solution, the one that uses the least number of steps. If there are at least two exit cells with the same number of steps, (X1, Y1) and (X2, Y2), then choose the first if X1 < X2, or if X1 = X2 and Y1 < Y2. The position (1,1) corresponds to the upper left corner.

    Examples: this is one sector; I start with 40 energy at position (3,3):

        12 11 12 11  3 12 12
        12 11 11 12  2  1 13
        11 11 12  2 13  2 14
        10 11 13  3  2  1 12
        10 11 13 13 11 12 13
        12 12 11 13 11 13 12
        13 12 12 11 11 11 11
        13 13 10 10 13 11 12

    The best solution to exit the sector is position (5, 1); the remaining energy is 12 and I need 8 steps to leave the area. For this sector I start with 8 energy at position (3,4):

         4  3  3  2  2  3  2
         2  5  2  2  2  3  3
         2  1  2  2  3  2  2
         4  3  3  2  2  4  1
         3  1  4  3  2  3  1
         2  2  3  3  0  3  4

    And for this one there is no way out, because it loses all its energy.

    Read the article

  • Algorithm for count-down timer that can add on time

    - by Person
    I'm making a general timer that can count up from 0 or count down from a certain number. I also want it to allow the user to add and subtract time. Everything is simple to implement except the case in which the timer is counting down from some number and the user adds or subtracts time. For example (m_clock is an instance of SFML's Clock):

        float Timer::GetElapsedTime()
        {
            if ( m_forward )
            {
                m_elapsedTime += m_clock.GetElapsedTime() - m_elapsedTime;
            }
            else
            {
                m_elapsedTime -= m_elapsedTime - m_startingTime + m_clock.GetElapsedTime();
            }
            return m_elapsedTime;
        }

    To be a bit more clear, imagine that the timer starts at 100 counting down. After 10 seconds, the above function works out to 100 -= 100 - 100 + 10, which equals 90. If it is called after 20 more seconds, it works out to 90 -= 90 - 100 + 30, which equals 70. This works for normal counting, but if the user calls AddTime() (just m_elapsedTime += arg), then the algorithm for backwards counting fails miserably. I know that I can do this using more members and keeping track of previous times, etc., but I'm wondering whether I'm missing some implementation that is extremely obvious. I'd prefer to keep it as simple as possible, in that single operation.
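
    The question's code is C++ with SFML, but the usual bookkeeping trick carries over to any language: never mutate the stored elapsed time when reading it, and let AddTime/SubtractTime adjust only a separate offset. A minimal C# sketch of that idea (class and member names are hypothetical):

        // Sketch only: remaining time derived from a monotonic clock plus a user-adjustable offset.
        using System.Diagnostics;

        class CountdownTimer
        {
            private readonly Stopwatch _clock = Stopwatch.StartNew();
            private readonly double _duration;   // seconds to count down from
            private double _offset;              // net effect of AddTime/SubtractTime calls

            public CountdownTimer(double durationSeconds) { _duration = durationSeconds; }

            public void AddTime(double seconds)      { _offset += seconds; }
            public void SubtractTime(double seconds) { _offset -= seconds; }

            // Reads have no side effects, so calling this repeatedly cannot drift.
            public double Remaining
            {
                get { return _duration + _offset - _clock.Elapsed.TotalSeconds; }
            }
        }

    Because the read path never writes state, adding or subtracting time is a single operation on the offset and the countdown stays consistent.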

    Read the article

  • need help fixing unique key in rails. rails is adding id causing duplicate key

    - by railsnew
    I need some help fixing the issue below. I had transaction blocks in my Rails code like this:

        @sqlcontact = "INSERT INTO contacts (id,\"cid\", \"hphone\", mphone, provider, cemail, email, sms , mail, phone) VALUES ('"+@id1+"','" + @id1 + "', '"+ params[:hphone] + "', '"+params[:mphone]+ "', '" + params[:provider] + "', '" + params[:cemail]+ "', '" + @varemail+ "', '"+@varsms+ "', '"+ @varmail+"', '"+@varphone+"')"

    My app is deployed to Heroku, so I was advised by them to remove the transaction blocks. So I changed the above to:

        @cont = Contact.new(:id => @id1, :cid => @id1, :hphone => params[:hphone], :mphone => params[:mphone], :provider => params[:provider], :cemail => params[:cemail], :email => @varemail, :sms => @varsms, :mail => @varmail, :phone => @varphone)
        @cont.save

    My app also already had data stored. Now the problem is that when I try to save a record, I keep getting the error:

        duplicate key value violates unique constraint "contacts_pkey"

    The error also shows the SQL query that is trying to insert the data; however, in that SQL query I do not see the id value. As you can see from my code, I am passing the id, so why is Rails not accepting it? Does it always include its own sequential id? Can I not override the default Rails magic? And if it does that, does it not look at the data that is already in the DB? I am really stuck here. What should I do? Should I just go back to my transaction block?

    Read the article

  • How do I create efficient instance variable mutators in Matlab?

    - by Trent B
    Previously, I implemented mutators as follows; however, it ran spectacularly slowly on a recursive OO algorithm I'm working on, and I suspected it may have been because I was duplicating objects on every function call... is this correct?

        %% Example Only
        obj2 = tripleAllPoints(obj1)
            obj.pts = obj.pts * 3;
            obj2 = obj1
        end

    I then tried implementing mutators without using the output object... however, it appears that in MATLAB I can't do this: the changes won't "stick" because of a scope issue?

        %% Example Only
        tripleAllPoints(obj1)
            obj1.pts = obj1.pts * 3;
        end

    For application purposes, an extremely simplified version of my code (which uses OO and recursion) is below.

        classdef myslice
            properties
                pts   % array of pts
                nROW  % number of rows
                nDIM  % number of dimensions
                subs  % sub-slices
            end % end properties

            methods
                function calcSubs(obj)
                    obj.subs = cell(1,obj.nROW);
                    for i=1:obj.nROW
                        obj.subs{i} = myslice;
                        obj.subs{i}.pts = obj.pts(1:i,2:end);
                    end
                end

                function vol = calcVol(obj)
                    if obj.nROW == 1
                        obj.volume = prod(obj.pts);
                    else
                        obj.volume = 0;
                        calcSubs(obj);
                        for i=1:obj.nROW
                            obj.volume = obj.volume + calcVol(obj.subs{i});
                        end
                    end
                end
            end % end methods
        end % end classdef

    Read the article

  • Every flash uploader giving bad progress values.

    - by Mike Boers
    The file upload script I wrote early last year for an internal website has been misbehaving oddly on a number of machines. On some machines it consistently works fine, on others it consistently misbehaves. I am having exactly the same problem with YUI Uploader, SWFUpload (2.2 and 2.5a), and Uploadify. On the misbehaving machines, the progress event (or callback as the case may be) is reporting the upload going far too quickly. It is progressing around 9 or 10MB/s, instead of the 50 or 60kb/s that is actually going on. The progress bar fills up very quickly, and then no more progress events are triggered. A few minutes later the completion event will trigger when the upload is actually done. I must emphasize that the file upload does proceed normally, even though the progress being reported is very wrong. The progress events are reporting a correct file size, but the reported amount uploaded is usually way too high, and it appears that it is always a multiple of 2^16 (65536). I'm only having this problem with Firefox 3.5 on Windows XP, all of which have various subversions of Flash 10. Has anyone heard of this happening, or have any idea what is going on? (I'm off to go file a number of bug reports, but hopefully someone here has some previous experience with this.)

    Read the article

  • Growing user control not updating

    - by user328259
    I am developing in C# and .NET 2.0. I have a user control that draws cells (columnar) depending upon the maximum number of cells; there are some drawing routines that generate the necessary cells. A property NumberOfCells adjusts the height of this control (CELLHEIGHT_CONSTANT * NumberOfCells), and the OnPaint() method is overridden with the code that draws that number of cells. There is another user control that contains a panel, which in turn contains the userControl1 from above; its property NumberCells changes userControl1's NumberOfCells. UserControl2 is then placed on a Windows form. On that form there is a NumericUpDown control (it only increments from 1). When the user increments by 1, I adjust VerticalScroll.Maximum by 1 as well. Everything works well and good, BUT when I increment once, the panel updates fine (it inserts a vertical scroll bar when necessary) but cells are not being added! I've tried invalidating userControl2 AND the form, but nothing seems to draw the newly added cells. Any assistance is appreciated. Thank you in advance. Lawrence
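
    One thing the excerpt doesn't show is whether userControl1 itself is resized and invalidated when NumberOfCells changes; invalidating UserControl2 or the form does not repaint the child control. A hedged sketch of how the setter inside userControl1 could do both (it reuses the question's CELLHEIGHT_CONSTANT; everything else is an assumption about the missing code):

        // Sketch only: inside userControl1, the cell-drawing control.
        private int numberOfCells;

        public int NumberOfCells
        {
            get { return numberOfCells; }
            set
            {
                numberOfCells = value;
                Height = CELLHEIGHT_CONSTANT * numberOfCells; // grow the control itself
                Invalidate();                                 // force OnPaint to redraw the cells
            }
        }

    If the containing panel has AutoScroll enabled, growing the child control this way usually updates the scroll range on its own, without setting VerticalScroll.Maximum by hand.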

    Read the article

  • Unit testing with Mocks. Test behaviour not implementation

    - by Kenny Eliasson
    Hi. I have always had a problem when unit testing classes that call other classes. For example, I have a class that creates a new user from a phone number, then saves it to the database and sends an SMS to the number provided, like the code below.

        public class UserRegistrationProcess : IUserRegistration
        {
            private readonly IRepository _repository;
            private readonly ISmsService _smsService;

            public UserRegistrationProcess(IRepository repository, ISmsService smsService)
            {
                _repository = repository;
                _smsService = smsService;
            }

            public void Register(string phone)
            {
                var user = new User(phone);
                _repository.Save(user);
                _smsService.Send(phone, "Welcome", "Message!");
            }
        }

    It is a really simple class, but how would you go about testing it? At the moment I'm using mocks, but I don't really like it:

        [Test]
        public void WhenRegistreringANewUser_TheNewUserIsSavedToTheDatabase()
        {
            var repository = new Mock<IRepository>();
            var smsService = new Mock<ISmsService>();
            var userRegistration = new UserRegistrationProcess(repository.Object, smsService.Object);
            var phone = "0768524440";

            userRegistration.Register(phone);

            repository.Verify(x => x.Save(It.Is<User>(user => user.Phone == phone)), Times.Once());
        }

        [Test]
        public void WhenRegistreringANewUser_ItWillSendANewSms()
        {
            var repository = new Mock<IRepository>();
            var smsService = new Mock<ISmsService>();
            var userRegistration = new UserRegistrationProcess(repository.Object, smsService.Object);
            var phone = "0768524440";

            userRegistration.Register(phone);

            smsService.Verify(x => x.Send(phone, It.IsAny<string>(), It.IsAny<string>()), Times.Once());
        }

    It feels like I am testing the wrong thing here. Any thoughts on how to make this better?
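
    One common alternative to verifying mock calls is a small hand-written fake that records what happened, so the test reads as plain state assertions on the observable outcome. A sketch against the question's interfaces (the Send parameter names are assumptions, since the interface definition isn't shown):

        // Sketch only: a tiny spy that records what Register() asked it to do.
        class FakeSmsService : ISmsService
        {
            public string SentTo;
            public int SendCount;

            public void Send(string phone, string subject, string body)
            {
                SentTo = phone;
                SendCount++;
            }
        }

        [Test]
        public void Register_SendsOneSmsToTheGivenNumber()
        {
            var repository = new Mock<IRepository>();   // not under test here, so still mocked
            var smsService = new FakeSmsService();
            var sut = new UserRegistrationProcess(repository.Object, smsService);

            sut.Register("0768524440");

            Assert.AreEqual("0768524440", smsService.SentTo);
            Assert.AreEqual(1, smsService.SendCount);
        }

    The behaviour being checked is the same, but the assertion is on recorded state rather than on a mock-framework verification expression.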

    Read the article

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LiNQ support.)

    So here's the problem: with peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID) then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value.

    So here's the debate:

        1. Abandon Entity Framework and use another ORM:
           - use NHibernate and give up LiNQ support, or
           - use linq2sql and give up future support (not to mention get bound to SQL Server on DB).
        2. Abandon GUIDs and go with another PK strategy.
        3. Devise a method to generate sequential GUIDs (COMBs?) at the application layer.

    I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
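
    For option 3, the usual COMB approach overwrites the last six bytes of a random GUID with a timestamp, which is the portion SQL Server weighs first when ordering uniqueidentifier values, so application-generated keys still insert roughly sequentially. A hedged C# sketch of that pattern (the byte layout follows the commonly published COMB recipe; verify the fragmentation benefit against your own workload):

        // Sketch only: a COMB-style GUID whose last six bytes hold a timestamp.
        using System;

        static class CombGuid
        {
            public static Guid NewComb()
            {
                byte[] guidBytes = Guid.NewGuid().ToByteArray();

                DateTime now = DateTime.UtcNow;
                TimeSpan days = now.Date - new DateTime(1900, 1, 1);
                TimeSpan time = now.TimeOfDay;

                byte[] dayBytes  = BitConverter.GetBytes((short)days.Days);
                // SQL Server datetime stores time in 1/300s ticks; mimic that resolution.
                byte[] timeBytes = BitConverter.GetBytes((int)(time.TotalMilliseconds / 3.333333));

                if (BitConverter.IsLittleEndian)
                {
                    Array.Reverse(dayBytes);
                    Array.Reverse(timeBytes);
                }

                Array.Copy(dayBytes,  0, guidBytes, 10, 2);  // bytes 10-11: day count
                Array.Copy(timeBytes, 0, guidBytes, 12, 4);  // bytes 12-15: time of day
                return new Guid(guidBytes);
            }
        }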

    Read the article

  • Very slow guards in my monadic random implementation (haskell)

    - by danpriduha
    Hi! I tried to write a random number generator implementation based on a number class. I also added Monad and MonadPlus instances. What does "MonadPlus" mean here, and why did I add this instance? Because I want to use guards like this:

        -- test.hs --
        import RandomMonad
        import Control.Monad
        import System.Random

        x = Rand (randomR (1 ::Integer, 3)) ::Rand StdGen Integer

        y = do
          a <- x
          guard (a /=2)
          guard (a /=1)
          return a

    Here are the RandomMonad.hs file contents:

        -- RandomMonad.hs --
        module RandomMonad where

        import Control.Monad
        import System.Random
        import Data.List

        data RandomGen g => Rand g a = Rand (g ->(a,g)) | RandZero

        instance (Show g, RandomGen g) => Monad (Rand g) where
          return x = Rand (\g ->(x,g))
          (RandZero)>>= _ = RandZero
          (Rand argTransformer)>>=(parametricRandom) = Rand funTransformer
            where
              funTransformer g
                | isZero x  = funTransformer g1
                | otherwise = (getRandom x g1, getGen x g1)
                where
                  x        = parametricRandom val
                  (val,g1) = argTransformer g
                  isZero RandZero = True
                  isZero _        = False

        instance (Show g, RandomGen g) => MonadPlus (Rand g) where
          mzero = RandZero
          RandZero `mplus` x = x
          x `mplus` RandZero = x
          x `mplus` y = x

        getRandom :: RandomGen g => Rand g a ->g ->a
        getRandom (Rand f) g = (fst (f g))

        getGen :: RandomGen g => Rand g a ->g -> g
        getGen (Rand f) g = snd (f g)

    When I run the ghci interpreter and give the following command:

        getRandom y (mkStdGen 2000000000)

    I see a memory overflow on my computer (1 GB). That is not expected, and if I delete one guard it runs very fast. Why is it so slow in this case? What am I doing wrong?

    Read the article

  • How to Declare Complex Nested C# Type for Web Service

    - by TheArtTrooper
    I would like to create a service that accepts a complex nested type. In a sample .asmx file I created:

        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        // To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line.
        // [System.Web.Script.Services.ScriptService]
        public class ServiceNest : System.Web.Services.WebService
        {
            public class Block
            {
                [XmlElement(IsNullable = false)]
                public int number;
            }

            public class Cell
            {
                [XmlElement(IsNullable = false)]
                public Block block;
            }

            public class Head
            {
                [XmlElement(IsNullable = false)]
                public Cell cell;
            }

            public class Nest
            {
                public Head head;
            }

            [WebMethod]
            public void TakeNest(Nest nest)
            {
            }
        }

    When I view the .asmx file in IE, the test page shows the example SOAP POST request as:

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <TakeNest xmlns="http://schemas.intellicorp.com/livecompare/">
              <nest>
                <head>
                  <cell>
                    <block xsi:nil="true" />
                  </cell>
                </head>
              </nest>
            </TakeNest>
          </soap:Body>
        </soap:Envelope>

    It hasn't expanded the <block> element into its number member. Looking at the WSDL, the types all look good. So is this just a limitation of the POST demo page creator? Thanks.

    Read the article

  • CEIL is one too high for exact integer divisions

    - by Synetech
    This morning I lost a bunch of files, but because the volume they were on was both internally and externally defragmented, all of the information necessary for a 100% recovery is available; I just need to fill in the FAT where required. I wrote a program to do this and tested it on a copy of the FAT that I dumped to a file, and it works perfectly, except that for a few of the files (17 out of 526) the FAT chain is one single cluster too long, and thus cross-linked with the next file. Fortunately I know exactly what the problem is. I used ceil in my EOF calculation because even a single byte over will require a whole extra cluster:

        //Cluster is the starting cluster of the file
        //Size is the size (in bytes) of the file
        //BPC is the number of bytes per cluster
        //NumClust is the number of clusters in the file
        //EOF is the last cluster of the file's FAT chain
        DWORD NumClust = ceil( (float)(Size / BPC) )
        DWORD EOF = Cluster + NumClust;

    This algorithm works fine for everything except files whose size happens to be exactly a multiple of the cluster size, in which case they end up one cluster too long. I thought about it for a while but am at a loss as to a way to do this. It seems like it should be simple, but somehow it is surprisingly tricky. What formula would work for files of any size?
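
    The usual all-integer fix is to round up without going through floats, and then subtract one because EOF is the last cluster of the chain rather than one past it. A short C# sketch using the question's variable names (uint stands in for DWORD; assumes Size > 0):

        // Sketch only: integer ceiling division, then -1 because EOF is the last
        // cluster in the chain, not one past it.
        static uint LastCluster(uint Cluster, uint Size, uint BPC)
        {
            uint NumClust = (Size + BPC - 1) / BPC;  // rounds up without floats
            return Cluster + NumClust - 1;           // Size == BPC -> NumClust == 1 -> last == Cluster
        }

        // Equivalent shortcut for Size > 0:  Cluster + (Size - 1) / BPC

    For an exact multiple of the cluster size this yields exactly Size / BPC clusters, which is where the original calculation gained its extra cluster.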

    Read the article

  • How to index a table with a Type 2 slowly changing dimension for optimal performance

    - by The Lazy DBA
    Suppose you have a table with a Type 2 slowly-changing dimension. Let's express this table with the following columns:

        * [Key]
        * [Value1]
        * ...
        * [ValueN]
        * [StartDate]
        * [ExpiryDate]

    In this example, let's suppose that [StartDate] is effectively the date on which the values for a given [Key] become known to the system, so our primary key would be composed of both [StartDate] and [Key]. When a new set of values arrives for a given [Key], we assign [ExpiryDate] to some pre-defined high surrogate value such as '12/31/9999'. We then set the existing "most recent" records for that [Key] to have an [ExpiryDate] equal to the [StartDate] of the new value: a simple update based on a join. So if we always wanted to get the most recent records for a given [Key], we know we could create a clustered index that is:

        * [ExpiryDate] ASC
        * [Key] ASC

    Although the keyspace may be very wide (say, a million keys), we can minimize the number of pages between reads by initially ordering them by [ExpiryDate]. And since we know the most recent record for a given key will always have an [ExpiryDate] of '12/31/9999', we can use that to our advantage.

    However... what if we want to get a point-in-time snapshot of all [Key]s at a given time? Theoretically, the entirety of the keyspace isn't all being updated at the same time. Therefore, for a given point in time, the window between [StartDate] and [ExpiryDate] is variable, so ordering by either [StartDate] or [ExpiryDate] would never yield a result in which all the records you're looking for are contiguous. Granted, you can immediately throw out all records in which the [StartDate] is greater than your defined point in time. In essence, in a typical RDBMS, what indexing strategy affords the best way to minimize the number of reads needed to retrieve the values for all keys as of a given point in time? I realize I can at least maximize IO by partitioning the table by [Key], but this certainly isn't ideal. Alternatively, is there a different type of slowly-changing dimension that solves this problem in a more performant manner?

    Read the article

  • PHP + MYSQLI: Variable parameter/result binding with prepared statements.

    - by Brian Warshaw
    In a project that I'm about to wrap up, I've written and implemented an object-relational mapping solution for PHP. Before the doubters and dreamers cry out "how on earth?", relax -- I haven't found a way to make late static binding work -- I'm just working around it in the best way that I possibly can. Anyway, I'm not currently using prepared statements for querying, because I couldn't come up with a way to pass a variable number of arguments to the bind_params() or bind_result() methods. Why do I need to support a variable number of arguments, you ask? Because the superclass of my models (think of my solution as a hacked-up PHP ActiveRecord wannabe) is where the querying is defined, and so the find() method, for example, doesn't know how many parameters it would need to bind. Now, I've already thought of building an argument list and passing a string to eval(), but I don't like that solution very much -- I'd rather just implement my own security checks and pass on statements. Does anyone have any suggestions (or success stories) about how to get this done? If you can help me solve this first problem, perhaps we can tackle binding the result set (something I suspect will be more difficult, or at least more resource-intensive if it involves an initial query to determine table structure).

    Read the article

  • Postgre database ignoring created index ?!

    - by drasto
    I have a Postgres database and a table called my_table. There are 4 columns in that table (id, column1, column2, column3). The id column is the primary key; there are no other constraints or indexes on the columns. The table has about 200000 rows. I want to print out all rows whose column2 value is equal (case-insensitively) to 'value12'. I use this:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    Here is the execution plan for this statement (the result of set enable_seqscan=on; EXPLAIN SELECT * FROM my_table WHERE column2 = lower('value12')):

        Seq Scan on my_table  (cost=0.00..4676.00 rows=10000 width=55)
          Filter: ((column2)::text = 'value12'::text)

    I consider this too slow, so I create an index on column column2 for better search performance:

        CREATE INDEX my_index ON my_table (lower(column2))

    Now I run the same select:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    and I expect it to be much faster because it can use the index. However, it is not faster; it is as slow as before. So I check the execution plan and it is the same as before (see above). It still uses a sequential scan and ignores the index! Where is the problem?

    Read the article

  • Generic callbacks

    - by bobobobo
    So, I'm trying to learn template metaprogramming better and I figure this is a good exercise for it. I'm trying to write code that can call back a function with any number of arguments I like passed to it.

        // First function to call
        int add( int x, int y ) ;
        // Second function to call
        double square( double x ) ;
        // Third func to call
        void go() ;

    The callback creation code should look like:

        // Write a callback object that
        // will be executed after 42ms for "add"
        Callback<int, int, int> c1 ;
        c1.func = add ;
        c1.args.push_back( 2 );  // these are the 2 args
        c1.args.push_back( 5 );  // to pass to the "add" function
                                 // when it is called

        Callback<double, double> c2 ;
        c2.func = square ;
        c2.args.push_back( 52.2 ) ;

    What I'm thinking is, using template metaprogramming, I want to be able to declare callbacks by writing a struct like this (please keep in mind this is VERY PSEUDOcode):

        <TEMPLATING ACTION <<ANY NUMBER OF TYPES GO HERE>> >
        struct Callback
        {
            double execTime ; // when to execute
            TYPE1 (*func)( TYPE2 a, TYPE3 b ) ;
            void* argList ;   // a stored list of arguments
                              // to plug in when it is time to call __func__
        } ;

    So that when called with

        Callback<int, int, int> c1 ;

    you would automatically get constructed for you, by < HARDCORE TEMPLATING ACTION >, a struct like:

        struct Callback
        {
            double execTime ; // when to execute
            int (*func)( int a, int b ) ;
            void* argList ;   // this would still be void*,
                              // but I somehow need to remember
                              // the types of the args..
        } ;

    Any pointers in the right direction to get started on writing this?

    Read the article

  • Better way to summarize data about stop times?

    - by Vimvq1987
    This question is close to this one: http://stackoverflow.com/questions/2947963/find-the-period-of-over-speed

    Here's my table:

        Longtitude  Latitude  Velocity  Time
        102         401       40        2010-06-01 10:22:34.000
        103         403       50        2010-06-01 10:40:00.000
        104         405       0         2010-06-01 11:00:03.000
        104         405       0         2010-06-01 11:10:05.000
        105         406       35        2010-06-01 11:15:30.000
        106         403       60        2010-06-01 11:20:00.000
        108         404       70        2010-06-01 11:30:05.000
        109         405       0         2010-06-01 11:35:00.000
        109         405       0         2010-06-01 11:40:00.000
        105         407       40        2010-06-01 11:50:00.000
        104         406       30        2010-06-01 12:00:00.000
        101         409       50        2010-06-01 12:05:30.000
        104         405       0         2010-06-01 11:05:30.000

    I want to summarize the times when the vehicle was stopped (velocity = 0), including: when each stop started and ended and how many minutes it lasted, how many times it stopped, and how much time it spent stopped in total. I wrote this query to do it:

        select longtitude, latitude, MIN(time), MAX(time), DATEDIFF(minute, MIN(Time), MAX(time)) as Timespan
        from table_1
        where velocity = 0
        group by longtitude, latitude

        select DATEDIFF(minute, MIN(Time), MAX(time)) as minute
        into #temp3
        from table_1
        where velocity = 0
        group by longtitude, latitude

        select COUNT(*) as [number] from #temp

        select SUM(minute) as [totaltime] from #temp3

        drop table #temp

    This query returns:

        longtitude  latitude  (No column name)         (No column name)         Timespan
        104         405       2010-06-01 11:00:03.000  2010-06-01 11:10:05.000  10
        109         405       2010-06-01 11:35:00.000  2010-06-01 11:40:00.000  5

        number
        2

        totaltime
        15

    You can see it works fine, but I really don't like the #temp table. Is there any way to query this without using a temp table? Thank you.

    Read the article

  • ASP.Net Gridview paging, pageindex always == 0.

    - by David Archer
    Hi all, I'm having a slight problem with my ASP.NET 3.5 app. I'm trying to get the program to pick up which page number has been clicked. I'm using ASP.NET's built-in AllowPaging="True" functionality. It's never the same without code, so here it is.

    ASP.NET:

        <asp:GridView ID="GridView1" runat="server" CellPadding="4" ForeColor="#333333" GridLines="Vertical"
            Width="960px" AllowSorting="True" EnableSortingAndPagingCallbacks="True" AllowPaging="True" PageSize="25" >
            <RowStyle BackColor="#F7F6F3" ForeColor="#333333" />
            <FooterStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
            <PagerStyle BackColor="#284775" ForeColor="White" HorizontalAlign="Center" />
            <SelectedRowStyle BackColor="#E2DED6" Font-Bold="True" ForeColor="#333333" />
            <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
            <EditRowStyle BackColor="#999999" />
            <AlternatingRowStyle BackColor="White" ForeColor="#284775" />
        </asp:GridView>

    C#:

        var fillTable = from ft in db.IncidentDatas
                        where ft.pUserID == Convert.ToInt32(ClientList.SelectedValue.ToString())
                        select new
                        {
                            Reference = ft.pRef.ToString(),
                            Date = ft.pIncidentDateTime.Value.Date.ToShortDateString(),
                            Time = ft.pIncidentDateTime.Value.TimeOfDay,
                            Premesis = ft.pPremises.ToString(),
                            Latitude = ft.pLat.ToString(),
                            Longitude = ft.pLong.ToString()
                        };

        if (fillTable.Count() > 0)
        {
            GridView1.DataSource = fillTable;
            GridView1.DataBind();

            var IncidentDetails = fillTable.ToList();
            for (int i = 0; i < IncidentDetails.Count(); i++)
            {
                int pageno = GridView1.PageIndex;
                int pagenostart = pageno * 25;
                if (i >= pagenostart && i < (pagenostart + 25))
                {
                    //Processing
                }
            }
        }

    Any idea why GridView1.PageIndex is always 0? The thing is, the paging itself works correctly: the grid always goes to the correct page, but PageIndex is always 0 when I try to read the number. Help!
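
    One thing worth checking: with EnableSortingAndPagingCallbacks="True", the grid pages through client-side callbacks, so server code running in an ordinary postback may never observe the new page index. A common alternative, sketched here rather than offered as a guaranteed fix, is to drop that attribute, wire up OnPageIndexChanging on the GridView, and read the index from the event arguments (BindGrid stands in for whatever re-runs the query and calls DataBind):

        // Sketch only: with EnableSortingAndPagingCallbacks removed from the markup,
        // the GridView raises PageIndexChanging on each pager click.
        protected void GridView1_PageIndexChanging(object sender, GridViewPageEventArgs e)
        {
            GridView1.PageIndex = e.NewPageIndex;   // now PageIndex reflects the clicked page
            BindGrid();                             // hypothetical helper: re-query and DataBind()
        }

    The markup would also need OnPageIndexChanging="GridView1_PageIndexChanging" on the asp:GridView tag for the handler to fire.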

    Read the article

  • Trouble with an depreciated constructor visual basic visual studio 2010

    - by VBPRIML
    My goal is to print labels with barcodes and a date stamp from an entry to a Zebra TLP 2844 when the user clicks the OK button or hits Enter. I found what I think might be the code for this on Zebra's site and have been integrating it into my program, but part of it is deprecated and I can't quite figure out how to update it. Below is what I have so far. The printer is attached via USB, and the program will also store the entered numbers in a database, but I have that part done. Any help would be greatly appreciated.

        Public Class ScanForm
            Inherits System.Windows.Forms.Form

            Public Const GENERIC_WRITE = &H40000000
            Public Const OPEN_EXISTING = 3
            Public Const FILE_SHARE_WRITE = &H2

            Dim LPTPORT As String
            Dim hPort As Integer

            Public Declare Function CreateFile Lib "kernel32" Alias "CreateFileA" (ByVal lpFileName As String,
                ByVal dwDesiredAccess As Integer,
                ByVal dwShareMode As Integer, <MarshalAs(UnmanagedType.Struct)> ByRef lpSecurityAttributes As SECURITY_ATTRIBUTES,
                ByVal dwCreationDisposition As Integer, ByVal dwFlagsAndAttributes As Integer,
                ByVal hTemplateFile As Integer) As Integer

            Public Declare Function CloseHandle Lib "kernel32" Alias "CloseHandle" (ByVal hObject As Integer) As Integer

            Dim retval As Integer

            <StructLayout(LayoutKind.Sequential)> Public Structure SECURITY_ATTRIBUTES
                Private nLength As Integer
                Private lpSecurityDescriptor As Integer
                Private bInheritHandle As Integer
            End Structure

            Private Sub OKButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles OKButton.Click
                Dim TrNum
                Dim TrDate
                Dim SA As SECURITY_ATTRIBUTES
                Dim outFile As FileStream, hPortP As IntPtr

                LPTPORT = "USB001"
                TrNum = Me.ScannedBarcodeText.Text()
                TrDate = Now()

                hPort = CreateFile(LPTPORT, GENERIC_WRITE, FILE_SHARE_WRITE, SA, OPEN_EXISTING, 0, 0)

                hPortP = New IntPtr(hPort) 'convert Integer to IntPtr
                outFile = New FileStream(hPortP, FileAccess.Write) 'Create FileStream using Handle
                Dim fileWriter As New StreamWriter(outFile)

                fileWriter.WriteLine(" ")
                fileWriter.WriteLine("N")
                fileWriter.Write("A50,50,0,4,1,1,N,")
                fileWriter.Write(Chr(34))
                fileWriter.Write(TrNum) 'prints the tracking number variable
                fileWriter.Write(Chr(34))
                fileWriter.Write(Chr(13))
                fileWriter.Write(Chr(10))
                fileWriter.Write("A50,100,0,4,1,1,N,")
                fileWriter.Write(Chr(34))
                fileWriter.Write(TrDate) 'prints the date variable
                fileWriter.Write(Chr(34))
                fileWriter.Write(Chr(13))
                fileWriter.Write(Chr(10))
                fileWriter.WriteLine("P1")
                fileWriter.Flush()
                fileWriter.Close()
                outFile.Close()
                retval = CloseHandle(hPort)

                'Add entry to database
                Using connection As New SqlClient.SqlConnection("Data Source=MNGD-LABS-APP02;Initial Catalog=ScannedDB;Integrated Security=True;Pooling=False;Encrypt=False"), _
                    cmd As New SqlClient.SqlCommand("INSERT INTO [ScannedDBTable] (TrackingNumber, Date) VALUES (@TrackingNumber, @Date)", connection)

                    cmd.Parameters.Add("@TrackingNumber", SqlDbType.VarChar, 50).Value = TrNum
                    cmd.Parameters.Add("@Date", SqlDbType.DateTime, 8).Value = TrDate
                    connection.Open()
                    cmd.ExecuteNonQuery()
                    connection.Close()
                End Using

                'Prepare data for next entry
                ScannedBarcodeText.Clear()
                Me.ScannedBarcodeText.Focus()

            End Sub
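
    The member that typically triggers the deprecation warning in code like this is the FileStream(IntPtr, FileAccess) constructor; newer frameworks prefer wrapping the raw Win32 handle in a SafeFileHandle first. A hedged sketch of that change, written in C# for brevity (SafeFileHandle and the matching FileStream overload are equally available from VB.NET):

        // Sketch only: wrap the handle returned by CreateFile in a SafeFileHandle
        // instead of using the obsolete FileStream(IntPtr, FileAccess) constructor.
        using System.IO;
        using Microsoft.Win32.SafeHandles;

        var handle = new SafeFileHandle(hPortP, ownsHandle: true); // true: disposing closes the handle
        using (var outFile = new FileStream(handle, FileAccess.Write))
        using (var fileWriter = new StreamWriter(outFile))
        {
            fileWriter.WriteLine("N");   // EPL label commands, as in the question
            // ... remaining label commands ...
            fileWriter.Flush();
        }
        // With ownsHandle set to true, no separate CloseHandle call is needed afterwards.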

    Read the article

  • KODO: how set up fetch plan for bidirectional relationships?

    - by BestPractices
    I am running KODO 4.2 and having an issue with inefficient queries being generated by KODO. This happens when fetching an object that contains a collection where that collection has a bidirectional relationship back to the first object.

        class Classroom {
            List<Student> _students;
        }

        class Student {
            Classroom _classroom;
        }

    If we create a fetch plan to get a list of Classrooms and their corresponding Students by setting up the following fetch plan:

        fetchPlan.addField(Classroom.class, "_students");

    this will result in two queries (get the classrooms and then get all students that are in those classrooms), which is what we would expect. However, if we include the reference back to the classroom in our fetch plan, in order for the _classroom field to get populated, by doing

        fetchPlan.addField(Student.class, "_classroom");

    this will result in X additional queries, where X is the number of students in each classroom. Can anyone explain how to fix this? KODO already has the original Classroom objects at the point that it's executing the queries to retrieve the Classroom objects and set them in each Student object's _classroom field. So I would expect KODO to simply set those objects in the _classroom field on each Student object accordingly and not go back to the database. Once again, the documentation for Kodo/JDO/OpenJPA is sorely lacking, but from what I've read it should be able to do this more efficiently. Note: EAGER_FETCH.PARALLEL is turned on, and I have tried this with caching (query and data caches) turned on and off; there is no difference in the resultant queries.

    Read the article

  • forward invocation, by hand vs magically?

    - by John Smith
    I have the following two classes:

        //file FruitTree.h
        @interface FruitTree : NSObject
        {
            Fruit * f;
            Leaf * l;
        }
        @end

        //file FruitTree.m
        @implementation FruitTree
        //here I get the number of seeds from the object f
        @end

        //file Fruit
        @interface Fruit : NSObject
        {
            int seeds;
        }
        -(int) countfruitseeds;
        @end

    My question is about how I request the number of seeds from f. I have two choices. Either I can call it explicitly, since I know f, i.e. I implement the method:

        -(int) countfruitseeds { return [f countfruitseeds]; }

    Or I can just use forwardInvocation:

        - (NSMethodSignature *)methodSignatureForSelector:(SEL)selector
        {
            // does the delegate respond to this selector?
            if ([f respondsToSelector:selector])
                return [f methodSignatureForSelector:selector];
            else if ([l respondsToSelector:selector])
                return [l methodSignatureForSelector:selector];
            else
                return [super methodSignatureForSelector: selector];
        }

        - (void)forwardInvocation:(NSInvocation *)invocation
        {
            [invocation invokeWithTarget:f];
        }

    (Note this is only a toy example to ask my question. My real classes have lots of methods, which is why I am asking.) Which is the better/faster method?

    Read the article

  • How to reference other documents in a couchDB view (joining like functionality)

    - by Surfrdan
    We have a CouchDB representation of an XML database which we use to power a JavaScript-based frontend for manipulating the XML documents. The basic structure is a simple 3-level hierarchy, i.e. A - B - C, where:

        A: parent document (type A)
        B: any number of child documents of parent type A
        C: any number of child documents of parent type B

    We represent these 3 document types in CouchDB with a 'type' attribute, e.g.:

        {
            "_id":"llgc-id:433",
            "_rev":"1-3760f3e01d7752a7508b047e0d094301",
            "type":"A",
            "label":"Top Level A document",
            "logicalMap":{
                "issues":{
                    "1":{
                        "URL":"http://hdl.handle.net/10107/434-0",
                        "FILE":"llgc-id:434"
                    },
                    "2":{
                        "URL":"http://hdl.handle.net/10107/467-0",
                        "FILE":"llgc-id:467"
                        etc...
                    }
                }
            }
        }

        {
            "_id":"llgc-id:433",
            "_rev":"1-3760f3e01d7752a7508b047e0d094301",
            "type":"B",
            "label":"a B document",
        }

    What I want to do is produce a view which returns documents just like the A type, but includes the label attribute from the B document within the logicalMap list, e.g.:

        {
            "_id":"llgc-id:433",
            "_rev":"1-3760f3e01d7752a7508b047e0d094301",
            "type":"A",
            "label":"Top Level A document",
            "logicalMap":{
                "issues":{
                    "1":{
                        "URL":"http://hdl.handle.net/10107/434-0",
                        "FILE":"llgc-id:434",
                        "LABEL":"a B document"
                    },
                    "2":{
                        "URL":"http://hdl.handle.net/10107/467-0",
                        "FILE":"llgc-id:467",
                        "LABEL":"another B document"
                        etc...
                    }
                }
            }
        }

    I'm struggling to get my head around the best way to do this. It looks like it should be fairly simple, though!

    Read the article

  • Calculate the retrieved rows in database Visual C#

    - by Tanya Lertwichaiworawit
    I am new to Visual C# and would like to know how to do calculations on data retrieved from a database. Using the above GUI, when "Calculate" is clicked, the program should display the number of students in textBox1 and the average GPA of all students in textBox2. Here is my database table "Students":

    I was able to display the number of students, but I'm still confused about how I can calculate the average GPA. Here's my code:

        private void button1_Click(object sender, EventArgs e)
        {
            string connection = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Database1.accdb";
            OleDbConnection connect = new OleDbConnection(connection);
            string sql = "SELECT * FROM Students";
            connect.Open();

            OleDbCommand command = new OleDbCommand(sql, connect);
            DataSet data = new DataSet();
            OleDbDataAdapter adapter = new OleDbDataAdapter(command);
            adapter.Fill(data, "Students");

            textBox1.Text = data.Tables["Students"].Rows.Count.ToString();

            double gpa;
            for (int i = 0; i < data.Tables["Students"].Rows.Count; i++)
            {
                gpa = Convert.ToDouble(data.Tables["Students"].Rows[i][2]);
            }

            connect.Close();
        }
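
    A sketch of the missing averaging step: accumulate a running total inside the loop instead of overwriting gpa on every pass, then divide by the row count. It assumes, as the question's code does, that the GPA lives in column index 2 of the Students table:

        // Sketch only: sum the GPA column, then divide by the number of rows.
        DataTable students = data.Tables["Students"];
        double total = 0;

        for (int i = 0; i < students.Rows.Count; i++)
        {
            total += Convert.ToDouble(students.Rows[i][2]);   // accumulate instead of overwrite
        }

        double averageGpa = students.Rows.Count > 0 ? total / students.Rows.Count : 0;
        textBox2.Text = averageGpa.ToString("0.00");          // e.g. "3.25"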

    Read the article
