Search Results

Search found 16987 results on 680 pages for 'second'.

Page 508/680

  • Problem in print layout near page end

    - by Miraaj
    Hi all, I am facing a problem with my print layout. Below is a description of the steps I followed and the problem I am facing. I have a custom view containing NSTextViews and NSTableViews arranged one below the other. I calculate the exact height of each NSTextView and NSTableView depending upon its content, and depending upon the calculated heights I arrange them in the super custom view. Then I print the view using this code:

        [self arrangeBriefLayoutDynamically]; // steps 2 and 3

        // setting fixed parameters for printing
        NSPrintInfo *printInfo = [NSPrintInfo sharedPrintInfo];
        [printInfo setVerticallyCentered:NO];
        [printInfo setRightMargin:12.0];
        [printInfo setTopMargin:37.0];
        [printInfo setLeftMargin:12.0];
        [printInfo setHorizontallyCentered:YES];
        [printInfo setHorizontalPagination:NSFitPagination];
        [printInfo setVerticalPagination:NSAutoPagination];
        [printInfo setPaperName:@"na-letter"];
        [printInfo setOrientation:NSPortraitOrientation];
        PMSetScale([printInfo PMPageFormat], 100.0);
        [NSPrintInfo setSharedPrintInfo:printInfo];
        [briefCompleteView print:nil];

    The problem is: when the size of a table view or text view grows such that it crosses the page boundary, the text near the boundary SOMETIMES renders improperly, i.e. part of a line's height lies on the first page and the rest of it lies on the second page. Can anyone suggest a way to resolve this? Thanks, Miraaj

  • Weird behaviour of C++ destructors

    - by Vilx-
        #include <iostream>
        #include <vector>

        using namespace std;

        int main()
        {
            vector< vector<int> > dp(50000, vector<int>(4, -1));
            cout << dp.size();
        }

    This tiny program takes a split second to execute when simply run from the command line, but when run in a debugger it takes over 8 seconds. Pausing the debugger reveals that it is in the middle of destroying all those vectors. WTF? Note: Visual Studio 2008 SP1, Core 2 Duo 6700 CPU with 2 GB of RAM. Added: To clarify, no, I'm not confusing Debug and Release builds. These results are from one and the same .exe, without even any recompiling in between. In fact, switching between Debug and Release builds changes nothing.

  • Can't select data from MySQL database: java.lang.NullPointerException

    - by Devel
    Hi, I'm trying to select data from a database using this code:

        // DATABASE
        ResultSet rs;
        String polecenie;
        Statement st;
        String[] subj;

        public void polacz() {
            try {
                Class.forName("com.mysql.jdbc.Driver");
                Connection pol = DriverManager.getConnection("jdbc:mysql://localhost:3306/testgenerator", "root", "pospaz");
                st = pol.createStatement();
                lblPolaczonoZBaza.setText("Polaczono z baza danych testgenerator");
            } catch (Exception ek) {
                statusMessageLabel.setText("Can't connect to d: " + ek);
            }
            polecenie = "select * from subjects";
            try {
                rs = st.executeQuery(polecenie);
                int i = 0;
                while (rs.next()) {
                    subj[i] = rs.getString("name");
                    i++;
                }
                st.close();
            } catch (Exception ek) {
                statusMessageLabel.setText("Can't select data: " + ek);
            }
        }

    The second catch shows the exception: java.lang.NullPointerException. I have looked everywhere and I can't find the solution. I'd be grateful for any help.

  • Maintaining a continuous count in PHP

    - by LiveEn
    I have a small problem maintaining a count for position. I have written a function that selects all the users within a page and positions them in order, e.g.:

        Mike Position 1
        Steve Position 2
        ...
        Jacob Position 30

    The problem I have is that when I move to the second page, the count restarts from one. E.g. Jenny should be number 31, but the list goes:

        Jenny Position 1
        Tanya Position 2
        ...

    Below is my function:

        function nrk($duty, $page, $position)
        {
            $url = "http://www.test.com/people.php?q=$duty&start=$page";
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $url);
            $result = curl_exec($ch);

            $dom = new DOMDocument();
            @$dom->loadHTML($result);
            $xpath = new DOMXPath($dom);
            $elements = $xpath->evaluate("//div");
            foreach ($elements as $element) {
                $name = $element->getElementsByTagName("name")->item(0)->nodeValue;
                $position = $position + 1;
                echo $name . " Position:" . $position . "<br>";
            }
            return $position;
        }

    Below is the for loop where I try to loop through the page count:

        for ($page = 0; $page <= $pageNumb; $page = $page + 10) {
            nrk($duty, $page, $position);
        }

    I don't want to maintain an array key value in the foreach because I drop certain names...
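
    A minimal sketch of one way the running count could be carried across pages, assuming the intent is simply to capture the value nrk() already returns (the loop above never does):

        // Hypothetical caller: feed each page's final position back in,
        // so that page 2 starts counting where page 1 stopped.
        $position = 0;
        for ($page = 0; $page <= $pageNumb; $page = $page + 10) {
            $position = nrk($duty, $page, $position); // nrk() returns the updated count
        }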

  • DelphiTwain: how to show the settings form

    - by Erwan
    Hi, I'm using DelphiTwain (delphitwain.sourceforge.net) to add scan functionality to my app. Everything was fine: when I click the scan button in my app, it shows the scan dialog with the scanner's properties, such as Page Size and Scanning Side (Canon DR-3010C), plus Scan and Cancel buttons. If I click Cancel, of course, all the properties revert to their previous values. How can I show this scanner properties dialog only to change properties, without scanning, given that I can already scan without showing the properties?

        Twain.LoadLibrary;
        Twain.LoadSourceManager;
        Twain.Source[CurrentSource].Loaded := TRUE;
        Twain.Source[CurrentSource].TransferMode := TTwainTransferMode(0);
        Twain.Source[CurrentSource].EnableSource(True, True);
        while Twain.Source[CurrentSource].Enabled do
          Application.ProcessMessages;
        Twain.UnloadLibrary;

    In

        Twain.Source[CurrentSource].EnableSource(True, True);

    the first True is ShowUI and the second True is Modal. I know it can be achieved, because I've seen another application that can show the scanner's properties without scanning, with only OK and Cancel buttons. I've searched Google all over but had no luck; or maybe it is just a limitation of the DelphiTwain component? Thanks, any suggestion is appreciated.

  • ORGetValue from Offline Registry - ERROR_MORE_DATA

    - by user314749
    I am trying to create an offline registry in memory using the offreg.dll provided in the Windows DDK 7 package. You can find out more information on offreg.dll here: MSDN. Currently, while attempting to read a value from an open registry hive/key, I receive error 234, ERROR_MORE_DATA. Here is the .h declaration of ORGetValue:

        DWORD ORAPI ORGetValue (
            __in ORHKEY Handle,
            __in_opt PCWSTR lpSubKey,
            __in_opt PCWSTR lpValue,
            __out_opt PDWORD pdwType,
            __out_bcount_opt(*pcbData) PVOID pvData,
            __inout_opt PDWORD pcbData
            );

    Here is the code that I am using to pull the data:

        [DllImport("offreg.dll", CharSet = CharSet.Auto, EntryPoint = "ORGetValue",
            SetLastError = true, CallingConvention = CallingConvention.StdCall)]
        public static extern uint ORGetValue(IntPtr Handle, string lpSubKey, string lpValue,
            out uint pdwType, out string pvData, out uint pcbData);

        IntPtr myHive;
        IntPtr myKey;
        string myValue;
        uint pdwtype;
        uint pcbdata;

        uint ret3 = ORGetValue(myKey, "", "DefaultUserName", out pdwtype, out myValue, out pcbdata);

    The goal is to be able to read myValue as a string. I am not sure if I need to use marshaling, or a second call with an adjusted buffer, or really how to adjust the buffer in C#. Any help or pointers would be greatly appreciated. Thank you.

  • Perl: Process Communication

    - by Shiftbit
    Can anyone explain how I can get my processes communicating successfully? I find the perldoc on IPC confusing. What I have so far is:

        $| = 1;
        $SIG{CHLD} = {wait};
        my $parentPid = $$;

        if ($pid = fork()) {
            if ($pid == 0) {
                pipe($parentPid, $$);
                open PARENT, "<$parentPid";
                while (<PARENT>) {
                    print $_;
                }
                close PARENT;
                exit();
            }
            else {
                pipe($parentPid, $pid);
                open CHILD, ">$pid"
                    or error("\nError opening: childPid\nRef: $!\n");
                open (FH, "<list")
                    or error("\nError opening: list\nRef: $!\n");
                while (<FH>) {
                    print CHILD, $_;
                }
                close FH or error("\nError closing: list\nRef: $!\n");
                close CHILD or error("\nError closing: childPid\nRef: $!\n");
            }
        }
        else {
            error("\nError forking\nRef: $!\n");
        }

    First question: what does perldoc pipe mean by READHANDLE, WRITEHANDLE? Second question: can I implement a solution without relying on CPAN or other modules?

  • Load Balancing of a PHP/MySQL script without big code changes

    - by DR.GEWA
    Sorry for my dummy question, but... I am writing a script in PHP/MySQL (CodeIgniter), and I am extremely interested in knowing whether there is a way, without big architectural changes to the script, to add load balancing. I mean, for example, I will now rent a medium dedicated server with 2 GB RAM, 200 GB of disk and a good processor, and this will be enough for, let's say, half a year for the users who will come. But when they become more and more numerous (it's a social network, and at night the server can expect 500-1500 or 5000-8000 users online), I wonder if there is a way to simply add a second server with some configuration that will bear the next load. Then again one more, and so on...

        <?
        if ($answer == YES) {
            how(??);
        } else {
            whatToDo(??);
        }
        ?>

    If there is no way, then maybe you could point me to the easiest load-balancing solution. I would also be extremely thankful if you could tell me whether, for such purposes, I should move to PostgreSQL or Firebird: which of them will be easier to handle in the future? I am running something like 60 queries for all the data on the mysite.com/users/show/$userId page... maybe too much, but anyway... after some optimization it could be 20-30...

  • Procedure or function AppendDataCT has too many arguments specified

    - by salvationishere
    I am developing a C# VS 2008 / SQL Server website application. I am a newbie to ASP.NET. I am getting the above error at runtime. Can you give me advice on how to fix this? Code snippet:

        public static string AppendDataCT(DataTable dt, Dictionary<int, string> dic)
        {
            string connString = ConfigurationManager.ConnectionStrings["AW3_string"].ConnectionString;
            string errorMsg;
            try
            {
                SqlConnection conn2 = new SqlConnection(connString);
                SqlCommand cmd = conn2.CreateCommand();
                cmd.CommandText = "dbo.AppendDataCT";
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Connection = conn2;
                SqlParameter p1, p2, p3;
                foreach (string s in dt.Rows[1].ItemArray)
                {
                    DataRow dr = dt.Rows[1]; // second row
                    p1 = cmd.Parameters.AddWithValue((string)dic[0], (string)dr[0]);
                    p1.SqlDbType = SqlDbType.VarChar;
                    p2 = cmd.Parameters.AddWithValue((string)dic[1], (string)dr[1]);
                    p2.SqlDbType = SqlDbType.VarChar;
                    p3 = cmd.Parameters.AddWithValue((string)dic[2], (string)dr[2]);
                    p3.SqlDbType = SqlDbType.VarChar;
                }
                conn2.Open();
                cmd.ExecuteNonQuery();

    It errors on this last line. And here is that stored procedure:

        ALTER PROCEDURE [dbo].[AppendDataCT]
            @col1 VARCHAR(50),
            @col2 VARCHAR(50),
            @col3 VARCHAR(50)
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @TEMP DATETIME
            SET @TEMP = (SELECT CONVERT (DATETIME, @col3))
            INSERT INTO Person.ContactType (Name, ModifiedDate)
            VALUES (@col2, @TEMP)
        END

  • Most efficient way to check for DBNull and then assign to a variable?

    - by ilitirit
    This question comes up occasionally, but I haven't seen a satisfactory answer. A typical pattern is (row is a DataRow):

        if (row["value"] != DBNull.Value)
        {
            someObject.Member = row["value"];
        }

    My first question is which is more efficient (I've flipped the condition):

        row["value"] == DBNull.Value;              // Or
        row["value"] is DBNull;                    // Or
        row["value"].GetType() == typeof(DBNull)   // Or... any suggestions?

    This indicates that .GetType() should be faster, but maybe the compiler knows a few tricks I don't? Second question: is it worth caching the value of row["value"], or does the compiler optimize the indexer away anyway? E.g.:

        object valueHolder;
        if (DBNull.Value == (valueHolder = row["value"])) {}

    Disclaimers: row["value"] exists; I don't know the column index of the column (hence the column name lookup); and I'm asking specifically about checking for DBNull and then assigning (not about premature optimization, etc.).

    Edit: I benchmarked a few scenarios (time in seconds, 10,000,000 trials):

        row["value"] == DBNull.Value:             00:00:01.5478995
        row["value"] is DBNull:                   00:00:01.6306578
        row["value"].GetType() == typeof(DBNull): 00:00:02.0138757

    Object.ReferenceEquals has the same performance as "==". The most interesting result? If you mismatch the case of the column name (e.g. "Value" instead of "value"), it takes roughly ten times longer (for a string):

        row["Value"] == DBNull.Value:             00:00:12.2792374

    The moral of the story seems to be that if you can't look up a column by its index, then ensure that the column name you feed to the indexer matches the DataColumn's name exactly. Caching the value also appears to be nearly twice as fast:

        No caching:   00:00:03.0996622
        With caching: 00:00:01.5659920

    So the most efficient method seems to be:

        object temp;
        string variable;
        if (DBNull.Value != (temp = row["value"]))
        {
            variable = temp.ToString();
        }

    This was a good learning experience.

  • How to display rich text in a tooltip in ASP.NET?

    - by mokokamello
    Experts! I use the following code to display a tooltip:

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
            DataKeyNames="ID" DataSourceID="AccessDataSource1">
            <Columns>
                <asp:CommandField ShowEditButton="True" />
                <asp:BoundField DataField="ID" HeaderText="ID" InsertVisible="False"
                    ReadOnly="True" SortExpression="ID" />
                <asp:BoundField DataField="datefu" HeaderText="date" SortExpression="datefu" />
                <asp:TemplateField HeaderText="title" SortExpression="titlefu">
                    <EditItemTemplate>
                        <asp:TextBox ID="TextBox1" runat="server" Text='<%# Bind("titlefu") %>'></asp:TextBox>
                    </EditItemTemplate>
                    <ItemTemplate>
                        <a href="#" title="<asp:Literal ID="Label1" runat="server" Text='<%# Eval("fu") %>'/>"/>
                        <asp:Label ID="NamePatientLabel" runat="server" Text='<%# Eval("titlefu") %>' />
                    </ItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>

    This displays the tooltip correctly as plain text. However, when I edit the text in a second GridView containing a rich text editor (making it, say, bold and red), the formatting shows up in that second GridView, but when I view the first GridView the tooltip does not render the rich text. I really need your help to display the tooltip as rich text.

  • Memory Bandwidth Performance for Modern Machines

    - by porgarmingduod
    I'm designing a real-time system that occasionally has to duplicate a large amount of memory. The memory consists of non-tiny regions, so I expect the copying performance will be fairly close to the maximum bandwidth the relevant components (CPU, RAM, motherboard) can manage. This led me to wonder what kind of raw memory bandwidth a modern commodity machine can muster. My aging Core2Duo gives me 1.5 GB/s if I use one thread to memcpy() (and understandably less if I memcpy() with both cores simultaneously). While 1.5 GB/s is a fair amount of data, the real-time application I'm working on will have something like 1/50th of a second, which means 30 MB. Basically, almost nothing. And perhaps worst of all, as I add multiple cores, I can process a lot more data without any increased performance for the needed duplication step. But a low-end Core2Duo isn't exactly hot stuff these days. Are there any sites with information, such as actual benchmarks, on raw memory bandwidth of current and near-future hardware? Furthermore, for duplicating large amounts of data in memory, are there any shortcuts, or is memcpy() as good as it will get? Given a bunch of cores with nothing to do but duplicate as much memory as possible in a short amount of time, what's the best I can do?

  • Shouldn't prepared statements be much faster?

    - by silversky
        $s = explode(" ", microtime());
        $s = $s[0] + $s[1];

        $con = mysqli_connect('localhost', 'test', 'pass', 'db') or die('Err');

        for ($i = 0; $i < 1000; $i++) {
            $stmt = $con->prepare("SELECT MAX(id) AS max_id, MIN(id) AS min_id FROM tb");
            $stmt->execute();
            $stmt->bind_result($M, $m);
            $stmt->free_result();
            $rand = mt_rand($m, $M) . '<br/>';

            $res = $con->prepare("SELECT * FROM tb WHERE id >= ? LIMIT 0,1");
            $res->bind_param("s", $rand);
            $res->execute();
            $res->free_result();
        }

        $e = explode(" ", microtime());
        $e = $e[0] + $e[1];
        echo number_format($e - $s, 4, '.', '');

    and:

        $link = mysql_connect("localhost", "test", "pass") or die();
        mysql_select_db("db") or die("Unable to select database" . mysql_error());

        for ($i = 0; $i < 1000; $i++) {
            $range_result = mysql_query("SELECT MAX(`id`) AS max_id, MIN(`id`) AS min_id FROM tb");
            $range_row = mysql_fetch_object($range_result);
            $random = mt_rand($range_row->min_id, $range_row->max_id);
            $result = mysql_query("SELECT * FROM tb WHERE id >= $random LIMIT 0,1");
        }

    Definitely prepared statements are much safer, and everywhere it says that they are also much faster, BUT in my test of the above code I get:

        - 2.45 sec for the prepared statements
        - 5.05 sec for the second example

    What do you think I'm doing wrong? Should I use the second solution, or should I try to optimize the prepared statements?
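
    One variable in this benchmark worth isolating: both statements are re-prepared on every iteration, while the usual reason prepared statements are fast is preparing once and executing many times. A hedged sketch of that variant (same queries as above, untested; it also adds the $stmt->fetch() call needed to populate the bound variables):

        // Prepare both statements once, outside the loop.
        $range = $con->prepare("SELECT MAX(id) AS max_id, MIN(id) AS min_id FROM tb");
        $pick  = $con->prepare("SELECT * FROM tb WHERE id >= ? LIMIT 0,1");

        for ($i = 0; $i < 1000; $i++) {
            $range->execute();
            $range->bind_result($M, $m);
            $range->fetch();                 // read the MAX/MIN row into $M and $m
            $range->free_result();

            $rand = mt_rand($m, $M);
            $pick->bind_param("i", $rand);   // "i", assuming id is an integer column
            $pick->execute();
            $pick->free_result();
        }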

  • OO Design / Patterns - Fat Model Vs Transaction Script?

    - by ben
    Ok, 'Fat' Model and Transaction Script both solve design problems associated with where to keep business logic. I've done some research, and popular thought says having all business logic encapsulated within the model is the way to go (mainly since Transaction Script can become really complex and often results in code duplication). However, how does this work if I want to use the TDG of a second model in my business logic? Surely Transaction Script presents a neater, less coupled solution than using one model inside the business logic of another? A practical example: I have two classes, User and Alert. When pushing User instances to the database (e.g. creating new user accounts), a business rule requires inserting some default Alert records too (e.g. a default 'welcome to the system' message). I see two options here:

    1) Add this rule as a User method, and in the process create a dependency between User and Alert (or, at least, Alert's Table Data Gateway).

    2) Use a Transaction Script, which avoids the dependency between models. (It also means the business logic is kept in a 'neutral' class and is easily accessible by Alert. That probably isn't too important here, though.)

    User takes responsibility for its own validation etc., but because we're talking about a business rule involving two models, Transaction Script seems like a better choice to me. Anyone spot flaws with this approach?
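
    For concreteness, a minimal sketch of option 2 as described (all class and method names here are hypothetical):

        // Hypothetical transaction script: the business rule lives in a neutral
        // class, so User never depends on Alert or its Table Data Gateway.
        class RegisterUserService
        {
            public function register(User $user, AlertTDG $alerts)
            {
                $user->validate();   // User still owns its own validation
                $user->insert();     // push the User instance to the database
                $alerts->insert($user->getId(), 'welcome to the system');
            }
        }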

  • Detecting what changed in an HTML Textfield

    - by teehoo
    For a major school project I am implementing a real-time collaborative editor. For a little background: basically what this means is that two (or more) users can type into a document at the same time, and their changes are automatically propagated to one another (similar to Etherpad). Now my problem is as follows. I want to be able to detect what changes a user carried out on an HTML textfield. They could:

    - Insert a character
    - Delete a character
    - Paste a string of characters
    - Cut a string of characters

    I want to be able to detect which of these changes happened and then notify the other clients, e.g. "insert character 'c' at position 2". Anyway, I was hoping to get some advice on how I would go about implementing the detection of these changes. My first attempt was to consider the caret position before and after a change occurred, but this failed miserably. For my second attempt I was thinking about doing a diff on the entire contents of the textfield's old and new values. Am I missing anything obvious with this solution? Is there something simpler?

  • WMI Query Script as a Job

    - by Kenneth
    I have two scripts. One calls the other with a list of servers as parameters; the second one executes a WMI query. When I run it manually, it does this perfectly. When I try to run it as a job, it hangs forever and I have to remove it. For the sake of space, here is the relevant part of the calling script, ProcessServers.ps1:

        Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList $sqlsrv,$destdb,$server,$instance

    GetServerDetailsLight.ps1:

        param($sqlsrv,$destdb,$server,$instance)

        $password = get-content C:\SQLPS\auth.txt | convertto-securestring
        $credentials = new-object -typename System.Management.Automation.PSCredential -argumentlist "DOMAIN\MYUSER",$password
        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
        $box_id = 0;

        if ($sqlsrv.length -eq 0)
        {
            write-output "No data passed"
            break
        }

        function getinfo
        {
            param(
                [string]$svr,
                [string]$inst
            )
            "Entered GetInfo with: $svr,$inst"
            $cs = get-wmiobject win32_operatingsystem -computername $svr -credential $credentials -authentication 6 -Verbose -Debug |
                select Name, Model, Manufacturer, Description, DNSHostName, Domain, DomainRole, PartOfDomain,
                       NumberOfProcessors, SystemType, TotalPhysicalMemory, UserName, Workgroup
            write-output "WMI Results: $cs"
        }

        getinfo $server $instance
        write-output "Complete"

    Executed as a job, it shows as 'running' forever:

        PS C:\sqlps> Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList DBSERVER,LOGDB,SERVER01,SERVER01

        Id   Name    State     HasMoreData   Location    Command
        --   ----    -----     -----------   --------    -------
        21   Job21   Running   True          localhost   param($sqlsrv,$destdb,...

        GAC    Version      Location
        ---    -------      --------
        True   v2.0.50727   C:\WINDOWS\assembly\GAC_MSIL\Microsoft.SqlServer.Smo\10.0.0.0__89845dcd8080cc91\Microsoft.SqlServer.Smo.dll

        getinfo MSDCHR01 MSDCHR01
        Entered GetInfo with: SERVER01,SERVER01

    The last output I ever get is 'Entered GetInfo with: SERVER01,SERVER01'. If I run it manually, like so:

        PS C:\sqlps> .\GetServerDetailsLight.ps1 DBSERVER LOGDB SERVER01 SERVER01

    the WMI query executes just as expected. I am trying to determine why this is, or at least find a useful way to trap errors from within jobs. Thanks!

  • PHP: If no Results, Split the Search Request and Try to Find Parts of the Search

    - by elmaso
    Hello, I want to split the search request into parts if there's nothing to find. Example: "nelly furtado ft. jimmy jones" returns no results, so try to find with nelly, furtado, jimmy or jones. I have an API URL, and that's the difficult part. Here are the actual snippets:

        $query = urlencode(strip_tags($_GET[search]));

    and

        $found = '0';
        if ($source == 'all') {
            if (!($res = @get_url('http://api.example.com/?key=' . $API . '&phrase=' . $query . '&sort=' . $sort))) {
                exit('<error>Cannot get requested information.</error>');
            }
        }

    How can I put an else branch into this snippet, so that if nothing is found it tries the first word, then the second word? Is this possible? Or maybe you can tell me where I can read about this kind of function? Thank you!
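
    A minimal sketch of the fallback idea, assuming get_url() returns something falsy when the phrase finds nothing (get_url() and the URL shape are taken from the question; everything else is hypothetical):

        // Try the full phrase first, then fall back to one word at a time.
        $res = @get_url('http://api.example.com/?key=' . $API . '&phrase=' . $query . '&sort=' . $sort);
        if (!$res) {
            foreach (explode(' ', urldecode($query)) as $word) {
                $res = @get_url('http://api.example.com/?key=' . $API
                    . '&phrase=' . urlencode($word) . '&sort=' . $sort);
                if ($res) {
                    break; // stop at the first word that returns results
                }
            }
        }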

  • jQuery Tools alert works once (but only once)

    - by Jim Miller
    I'm trying to build a simple alert mechanism with jQuery Tools: in response to a bit of JavaScript code, pop up an overlay with a message and an OK button that, when clicked, makes the overlay go away. Trivial, or it should be. I've been slavishly following http://flowplayer.org/tools/demos/overlay/trigger.html, and have something that works fine the first time it's invoked, but only that time. If I repeat the JS action that should expose the overlay, it doesn't. My content/DIV:

        <div class='modal' id='the_alert'>
          <div id='modal_content' class='modal_content'>
            <h2>hi there</h2>
            this is the body
            <p>
              <button class='close'>OK</button>
            </p>
          </div>
          <div id='modal_background' class='modal_background'><img src='/images/overlay/f9f9f9-180.png' class='stretch' alt='' /></div>
        </div>

    and the JavaScript:

        function showOverlayDialog() {
          $('#the_alert').overlay({
            mask: {color: '#cccccc', loadSpeed: 200, opacity: 0.9},
            closeOnClick: false,
            load: true
          });
        }

    As I said: when showOverlayDialog() is invoked the first time, the overlay appears just like it should, and goes away when the OK button is clicked. But if I cause showOverlayDialog() to run again, without reloading the page, nothing happens. If I reload the page, then the pattern repeats: the first invocation brings up the overlay, but the second one doesn't. I'm obviously missing something -- any advice out there? Thanks!

  • How can I reject a Windows "Service Stop" request in ATL 7?

    - by Matt Dillard
    I have a Windows service built upon ATL 7's CAtlServiceModuleT class. This service serves up COM objects that are used by various applications on the system, and these other applications naturally start getting errors if the service is stopped while they are still running. I know that ATL DLLs solve this problem by returning S_OK in DllCanUnloadNow() if CComModule's GetLockCount() returns 0; that is, it checks to make sure no one is currently using any COM objects served up by the DLL. I want equivalent functionality in the service. Here is what I've done in my override of CAtlServiceModuleT::OnStop():

        void CMyServiceModule::OnStop()
        {
            if (GetLockCount() != 0) {
                return;
            }
            BaseClass::OnStop();
        }

    Now, when the user attempts to stop the service from the Services panel, they are presented with an error message:

        Windows could not stop the XYZ service on Local Computer.
        The service did not return an error. This could be an internal Windows error
        or an internal service error. If the problem persists, contact your system
        administrator.

    The stop request is indeed refused, but it appears to put the service in a bad state. A second stop request results in this error message:

        Windows could not stop the XYZ service on Local Computer.
        Error 1061: The service cannot accept control messages at this time.

    Interestingly, the service does actually stop this time (although I'd rather it not, since there are still outstanding COM references). I have two questions: Is it considered bad practice for a service to refuse to stop when asked? And is there a polite way to signify that the stop request is being refused, one that doesn't put the service into a bad state?

  • Xcode: iPhone Swipe Gesture crash

    - by David DelMonte
    I have an app in which I'd like a swipe gesture to flip to a second view. The app is all set up with buttons that work. The swipe gesture, though, causes a crash (EXC_BAD_ACCESS). The gesture code is:

        - (void)handleSwipe:(UISwipeGestureRecognizer *)recognizer
        {
            NSLog(@"%s", __FUNCTION__);
            switch (recognizer.direction) {
                case (UISwipeGestureRecognizerDirectionRight):
                    [self performSelector:@selector(flipper:)];
                    break;
                case (UISwipeGestureRecognizerDirectionLeft):
                    [self performSelector:@selector(flipper:)];
                    break;
                default:
                    break;
            }
        }

    and "flipper" looks like this:

        - (IBAction)flipper:(id)sender
        {
            FlashCardsAppDelegate *mainDelegate = (FlashCardsAppDelegate *)[[UIApplication sharedApplication] delegate];
            [mainDelegate flipToFront];
        }

    flipToBack (and flipToFront) look like this:

        - (void)flipToBack
        {
            NSLog(@"%s", __FUNCTION__);
            BackViewController *theBackView = [[BackViewController alloc] initWithNibName:@"BackView" bundle:nil];
            [self setBackViewController:theBackView];
            [UIView beginAnimations:nil context:NULL];
            [UIView setAnimationDuration:1.0];
            [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft forView:window cache:YES];
            [frontViewController.view removeFromSuperview];
            [self.window addSubview:[backViewController view]];
            [UIView commitAnimations];
            [frontViewController release];
            frontViewController = nil;
            [theBackView release];
            // NSLog (@" FINISHED ");
        }

    Maybe I'm going about this the wrong way... All ideas are welcome...

  • R: disentangling scopes

    - by rescdsk
    Hi, right now in my R project I have functions1.R with doFoo() and doBar(), functions2.R with other functions, and main.R with the main program in it, which first does source('functions1.R'); source('functions2.R') and then calls the other functions. I've been starting the program from the R GUI in Mac OS X, with source('main.R'). This is fine the first time, but after that, the variables that were defined the first time through the program are still defined the second time functions*.R are sourced, and so the functions get a whole bunch of extra variables defined. I don't want that! I want an "undefined variable" error when my function uses a variable it shouldn't! Twice this has given me very late nights of debugging! So how do other people deal with this sort of problem? Is there something like source(), but that makes an independent namespace that doesn't fall through to the main one? Making a package seems like one solution, but it seems like a big pain in the butt compared to, e.g., Python, where a source file is automatically a separate namespace. Any tips? Thank you!

  • Why is execution-time method resolution faster than compile-time resolution?

    - by Felix
    At school, we learned about virtual functions in C++ and how they are resolved (or found, or matched, I don't know what the terminology is; we're not studying in English) at execution time instead of compile time. The teacher also told us that compile-time resolution is much faster than execution-time resolution (and it would make sense for it to be so). However, a quick experiment would suggest otherwise. I've built this small program:

        #include <iostream>
        #include <limits.h>

        using namespace std;

        class A {
        public:
            void f() {
                // do nothing
            }
        };

        class B: public A {
        public:
            void f() {
                // do nothing
            }
        };

        int main() {
            unsigned int i;
            A *a = new B;
            for (i = 0; i < UINT_MAX; i++)
                a->f();
            return 0;
        }

    where I made A::f() once normal, once virtual. Here are my results:

        [felix@the-machine C]$ time ./normal
        real    0m25.834s
        user    0m25.742s
        sys     0m0.000s
        [felix@the-machine C]$ time ./virtual
        real    0m24.630s
        user    0m24.472s
        sys     0m0.003s
        [felix@the-machine C]$ time ./normal
        real    0m25.860s
        user    0m25.735s
        sys     0m0.007s
        [felix@the-machine C]$ time ./virtual
        real    0m24.514s
        user    0m24.475s
        sys     0m0.000s
        [felix@the-machine C]$ time ./normal
        real    0m26.022s
        user    0m25.795s
        sys     0m0.013s
        [felix@the-machine C]$ time ./virtual
        real    0m24.503s
        user    0m24.468s
        sys     0m0.000s

    There seems to be a steady ~1 second difference in favor of the virtual version. Why is this? Relevant or not: dual-core Pentium @ 2.80 GHz, no extra applications running between the two tests. Arch Linux with gcc 4.5.0, compiling normally, like:

        $ g++ test.cpp -o normal

    Also, -Wall doesn't spit out any warnings, either.

  • what's an effective way to build a csproj file in code?

    - by jcollum
    I'd like to avoid a command line for this. I've been using the MSBuild API (Microsoft.Build.Framework and Microsoft.Build.BuildEngine) with code that looks like this:

        this.buildEngine = new Engine();
        BuildPropertyGroup props = new BuildPropertyGroup();
        props.SetProperty("Configuration", "Debug");
        this.buildEngine.RegisterLogger(this.logger);

        Project proj = new Project(this.buildEngine);
        proj.LoadXml(this.projectFileAndPath, ProjectLoadSettings.None);
        this.buildEngine.BuildProject(proj, "Build");

    However, I've run into enough problems that I can't find answers for that I'm really wondering if I'm doing this right. First, I can't find the output (there's no bin directory in any of the places where I figured the DLLs would end up). Second, I tried building a project that I had made in VS2008, and the line

        proj.LoadXml(

    fails with an invalid XML encoding error. But of course the XML file is valid, since VS2008 can build it (I checked). At this point I'm beginning to wonder if I've picked up some code that's way out of date, or a methodology that's been superseded by something else. Opinions?

  • Efficient way of calculating average difference of array elements from array average value

    - by Saysmaster
    Is there a way to calculate the average distance of array elements from the array's average value by only "visiting" each array element once? (I'm searching for an algorithm.) Example:

        Array:            [  1  ,   5  ,   4  ,   9  ,   6  ]
        Average:          ( 1 + 5 + 4 + 9 + 6 ) / 5 = 5
        Distance array:   [ |1-5| , |5-5| , |4-5| , |9-5| , |6-5| ] = [ 4 , 0 , 1 , 4 , 1 ]
        Average distance: ( 4 + 0 + 1 + 4 + 1 ) / 5 = 2

    The simple algorithm needs two passes:

    1st pass) Read and accumulate the values, then divide the result by the array length to calculate the average value of the array's elements.

    2nd pass) Read the values, accumulate each one's distance from the previously calculated average, then divide the result by the array length to find the average distance of the elements from the average value of the array.

    The two passes are identical. Each is the classic algorithm for calculating the average of a set of values: the first one takes as input the elements of the array, the second one the distances of each element from the array's average value. Calculating the average can be modified to not accumulate the values, but calculate the average "on the fly" as we sequentially read the elements from the array. The formula is:

        Compute running average of the array's elements
        -----------------------------------------------
        RA[1] = A[1]
        RA[i] = RA[i-1] - RA[i-1]/i + A[i]/i    { for i > 1 }

    where A[x] is the array's element at position x, and RA[x] is the average of the array's elements between positions 1 and x (the running average). My question is: is there a similar algorithm to calculate "on the fly" (as we read the array's elements) the average distance of the elements from the array's mean value? The problem is that, as we read the array's elements, the final average value of the array is not known; only the running average is. So calculating differences from the running average will not yield the correct result. I suppose, if such an algorithm exists, it should probably have the "ability" to compensate, on each new element read, for the error accumulated so far.
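
    For reference, a short sketch of the one-pass running average given above; the two-pass method is this run twice, once over the elements and once over the distances (function and variable names are hypothetical):

        // Running average: RA[1] = A[1], RA[i] = RA[i-1] - RA[i-1]/i + A[i]/i.
        function runningAverage(array $a) {
            $ra = 0.0;
            foreach ($a as $k => $v) {
                $i  = $k + 1;                     // 1-based position
                $ra = $ra - $ra / $i + $v / $i;   // for i == 1 this reduces to A[1]
            }
            return $ra;
        }

        echo runningAverage(array(1, 5, 4, 9, 6)); // prints 5, matching the example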

  • How to handle pagination queries properly with mongodb and php?

    - by luckytaxi
    Am I doing this right? I went back to look at some old PHP code with MySQL and I've managed to get it to work, but I'm wondering if there's a much cleaner and faster way of accomplishing this. First I need to get the total number of documents:

        $total_documents = $collection->find(array(
            "tags"    => $tag,
            "seeking" => $this->session->userdata('gender'),
            "gender"  => $this->session->userdata('seeking')))->count();

        $skip  = (int)($docs_per_page * ($page - 1));
        $limit = $docs_per_page;
        $total_pages = ceil($total_documents / $limit);

        // Query to populate an array so I can display it with pagination
        $data['result'] = $collection->find(array(
            "tags"    => $tag,
            "seeking" => $this->session->userdata('gender'),
            "gender"  => $this->session->userdata('seeking')))
            ->limit($limit)->skip($skip)->sort(array("_id" => -1));

    My question is: can I run the query in one shot? I'm basically running the same query twice, except the second time I pass the values to skip and limit the records.
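
    One small cleanup, sketched under the assumption of the legacy PHP Mongo driver (where MongoCursor::count() ignores limit/skip by default, and modifiers can be chained onto a cursor before iteration starts): build the query once and reuse the cursor. The server still performs a count plus a find, but the duplicated query construction goes away (untested):

        // Build the query array once, reuse it for the count and the page of results.
        $query = array(
            "tags"    => $tag,
            "seeking" => $this->session->userdata('gender'),
            "gender"  => $this->session->userdata('seeking'),
        );

        $cursor = $collection->find($query);
        $total_documents = $cursor->count();   // counts all matches, ignoring limit/skip
        $total_pages = ceil($total_documents / $docs_per_page);

        $data['result'] = $cursor
            ->sort(array("_id" => -1))
            ->skip((int)($docs_per_page * ($page - 1)))
            ->limit($docs_per_page);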
