Search Results

Search found 22065 results on 883 pages for 'performance testing'.

Page 779/883

  • Round date to 10 minutes interval

    - by Peter Lang
    I have a DATE column that I want to round down to the next-lower 10-minute interval in a query (see example below). I managed to do it by truncating the seconds and then subtracting the last digit of the minutes.

        WITH test_data AS (
          SELECT TO_DATE('2010-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:05:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:09:59', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:10:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2099-01-01 10:00:33', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
        ) -- end of test data
        SELECT d, TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
        FROM test_data

    And here is the result:

        01.01.2010 10:00:00    01.01.2010 10:00:00
        01.01.2010 10:05:00    01.01.2010 10:00:00
        01.01.2010 10:09:59    01.01.2010 10:00:00
        01.01.2010 10:10:00    01.01.2010 10:10:00
        01.01.2099 10:00:33    01.01.2099 10:00:00

    Works as expected, but is there a better way?

    EDIT: I was curious about performance, so I did the following test with 500,000 rows and (not really) random dates. I am going to add the results as comments to the provided solutions.

        DECLARE
          t TIMESTAMP := SYSTIMESTAMP;
        BEGIN
          FOR i IN (
            WITH test_data AS (
              SELECT SYSDATE + ROWNUM / 5000 d FROM dual CONNECT BY ROWNUM <= 500000
            )
            SELECT TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60) FROM test_data
          ) LOOP
            NULL;
          END LOOP;
          dbms_output.put_line( SYSTIMESTAMP - t );
        END;

    This approach took 3.24 s.
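    For comparison, a minimal C# sketch of the same round-down-to-10-minutes rule (the helper name is invented for illustration and is not from the original post):

        using System;

        static class TimeRounding
        {
            // Truncate seconds, then drop the minutes past the last 10-minute mark.
            public static DateTime FloorToTenMinutes(DateTime d)
            {
                var toMinute = new DateTime(d.Year, d.Month, d.Day, d.Hour, d.Minute, 0, d.Kind);
                return toMinute.AddMinutes(-(toMinute.Minute % 10));
            }
        }

        // FloorToTenMinutes(new DateTime(2010, 1, 1, 10, 9, 59)) -> 2010-01-01 10:00:00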

    Read the article

  • Setting up a web development/build environment

    - by Eric
    Hello all,

    My current project has a development web server and a live web server. Developers make changes to files on the dev server and test them (by going to the dev address), and make changes as necessary. When the file or files are ready to go, they are copied to the live server. There is no version control.

    As you might expect, there are some problems with this model:
    - It's hard to keep track of what other programmers have done.
    - It's hard to keep track of what files should be copied to the live server.
    - There is no version control.

    I'm in a position to make nearly any change I like, but I want it to be the right one! I have been turning this over in my head for quite a while, and I have a solution that might be okay. But I want SO's opinion. Certainly version control needs to be added. But how should it work with the existing codebase, and where should the developers be testing? How can anyone know what needs to be moved to the live server? What other details need to be addressed? How would you attack this problem?

    Supplementary information:
    - The website is vital, but not mission critical. A small amount of downtime is acceptable.
    - There are very few developers. (Right now, only 4.)
    - History: Before I started, the project used Visual Source Safe. This was a sufficiently bad experience that they quit using it and abandoned version control.
    - The project is an ASP.NET (C#) website.

    This seems like a question that may have a complicated answer. Thanks for thinking about it!

    Read the article

  • Is there a bug in the Matrix.RotateAt method for certain angles? .NET WinForms

    - by Jules
    Here's the code I'm using to rotate:

        Dim m As New System.Drawing.Drawing2D.Matrix
        Dim size = image.Size
        m.RotateAt(degreeAngle, New PointF(CSng(size.Width / 2), CSng(size.Height / 2)))
        Dim temp As New Bitmap(600, 600, Imaging.PixelFormat.Format32bppPArgb)
        Dim g As Graphics = Graphics.FromImage(temp)
        g.Transform = m
        g.DrawImage(image, 0, 0)

    (1) Disposals removed for brevity.
    (2) I test the code with a 200 x 200 rectangle.
    (3) Size 600,600 is just an arbitrary large value that I know will fit the right and bottom sides of the rotated image for testing purposes.
    (4) I know that, with this code, the top and left edges will be clipped because I'm not transforming the origin after the rotate.

    The problem only occurs at certain angles:
    (1) At 90, the right-hand edge disappears completely.
    (2) At 180, the right and bottom edges are there, but very faded.
    (3) At 270, the bottom edge disappears completely.

    Is this a known bug? If I manually rotate the corners and draw the image by specifying an output rectangle, I don't get the same problem - though it is slightly slower than using RotateAt.
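    A minimal C# sketch of one commonly suggested workaround - drawing with a higher-quality pixel offset mode so edge pixels are not dropped at exact 90/180/270 rotations. The underlying cause (edge sampling during interpolation) is an assumption here, not a confirmed diagnosis of the poster's case:

        using System.Drawing;
        using System.Drawing.Drawing2D;

        static Bitmap RotateAboutCenter(Image image, float degreeAngle)
        {
            var temp = new Bitmap(600, 600, System.Drawing.Imaging.PixelFormat.Format32bppPArgb);
            using (var g = Graphics.FromImage(temp))
            using (var m = new Matrix())
            {
                // These two settings change how edge pixels are sampled during the rotation.
                g.PixelOffsetMode = PixelOffsetMode.HighQuality;
                g.InterpolationMode = InterpolationMode.HighQualityBilinear;

                m.RotateAt(degreeAngle, new PointF(image.Width / 2f, image.Height / 2f));
                g.Transform = m;
                g.DrawImage(image, 0, 0);
            }
            return temp;
        }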

    Read the article

  • Do the Python libraries have a natural dependence on the global namespace?

    - by msw
    I first ran into this when trying to determine the relative performance of two generators:

        t = timeit.repeat('g.get()', setup='g = my_generator()')

    So I dug into the timeit module and found that the setup and statement are evaluated with their own private, initially empty namespaces, so naturally the binding of g never becomes accessible to the g.get() statement. The obvious solution is to wrap them into a class, thus adding to the global namespace.

    I bumped into this again when attempting, in another project, to use the multiprocessing module to divide a task among workers. I even bundled everything nicely into a class, but unfortunately the call pool.apply_async(runmc, arg) fails with a PicklingError, because buried inside the work object that runmc instantiates is (effectively) an assignment:

        self.predicate = lambda x, y: x > y

    so the whole object (understandably) can't be pickled. Whereas

        def foo(x, y):
            return x > y
        pickle.dumps(foo)

    is fine, the sequence bar = lambda x, y: x > y yields True from callable(bar) and from type(bar), but it Can't pickle <function <lambda> at 0xb759b764>: it's not found as __main__.<lambda>.

    I've given only code fragments because I can easily fix these cases by merely pulling them out into module-level or object-level defs. The bug here appears to be in my understanding of the semantics of namespace use in general. If the nature of the language requires that I create more def statements, I'll happily do so; I fear that I'm missing an essential concept, though. Why is there such a strong reliance on the global namespace? Or, what am I failing to understand? Namespaces are one honking great idea -- let's do more of those!

    Read the article

  • Bash PATH: How long is too long?

    - by ajwood
    Hi,

    I'm currently designing a software quarantine pattern to use on Ubuntu. I'm not sure how standard "quarantine" is in this context, so here is what I hope to accomplish...

    Inside a particular quarantine is all of the stuff one needs to run an application (bin, share, lib, etc.). Ideally, the quarantine has no leaks, which means it's not relying on any code outside of itself on the system. A quarantine can be defined as a set of executables (and some environment settings needed to make them run).

    I think it will be beneficial to separate the built packages enough that upgrading to a newer version of the quarantine won't require rebuilding the whole thing. I'll be able to update just a few packages, and then the new quarantine can use some of the old parts and some of the new parts.

    One issue I'm wondering about is the environment variables I'll be setting up to use a particular quarantine. Is there a hard limit on how big PATH can be (either in the number of characters, or in the number of directories it contains)? Might a path be so long that it affects performance?

    Thanks very much,
    Andrew

    p.s. Any other wisdom that might help my design would be greatly appreciated :)

    Read the article

  • Android: Problems downloading images and converting to bitmaps

    - by Mike
    Hi all,

    I am working on an application that downloads images from a url. The problem is that only some images are being correctly downloaded and others are not. First off, here is the problem code:

        public Bitmap downloadImage(String url) {
            HttpClient client = new DefaultHttpClient();
            HttpResponse response = null;
            try {
                response = client.execute(new HttpGet(url));
            } catch (ClientProtocolException cpe) {
                Log.i(LOG_FILE, "client protocol exception");
                return null;
            } catch (IOException ioe) {
                Log.i(LOG_FILE, "IOE downloading image");
                return null;
            } catch (Exception e) {
                Log.i(LOG_FILE, "Other exception downloading image");
                return null;
            }
            // Convert image from stream to bitmap object
            try {
                Bitmap image = BitmapFactory.decodeStream(response.getEntity().getContent());
                if (image == null)
                    Log.i(LOG_FILE, "image conversion failed");
                return image;
            } catch (Exception e) {
                Log.i(LOG_FILE, "Other exception while converting image");
                return null;
            }
        }

    So what I have is a method that takes the url as a string argument, downloads the image, converts the HttpResponse stream to a bitmap by means of the BitmapFactory.decodeStream method, and returns it.

    The problem is that when I am on a slow network connection (almost always 3G rather than Wi-Fi), some images are converted to null - not all of them, only some of them. Using a Wi-Fi connection works perfectly; all the images are downloaded and converted properly.

    Does anyone know why this is happening? Or better, how can I fix this? How would I even go about testing to determine the problem? Any help is awesome; thank you!

    Read the article

  • How much is too much memory allocation in NDK?

    - by Maximus
    The NDK download page notes that, "Typical good candidates for the NDK are self-contained, CPU-intensive operations that don't allocate much memory, such as signal processing, physics simulation, and so on."

    I came from a C background and was excited to try to use the NDK to handle most of my OpenGL ES functions and any native functions related to physics, animation of vertices, etc. I'm finding that I'm relying quite a bit on native code and wondering if I may be making some mistakes. I've had no trouble with testing at this point, but I'm curious whether I may run into problems in the future.

    For example, I have a game struct defined (somewhat like what is seen in the San Angeles example). I'm loading vertex information for objects dynamically (just what is needed for an active game area), so there's quite a bit of memory allocation happening for vertices, normals, texture coordinates, indices and texture graphic data... just to name the essentials. I'm quite careful about freeing what is allocated between game areas.

    Would I be safer setting some caps on array sizes, or should I charge bravely forward as I'm going now?

    Read the article

  • Windows Forms application C# | Sending Tweet

    - by Andrew Craswell
    I've been able to get my Twitter web app working just fine, but I've recently decided to add tweeting functionality to a Windows Form, and I'm having no luck sending tweets. No errors are thrown or anything. Mind looking over my code?

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;
        using TweetSharp;

        namespace TwitterAdvirtiser
        {
            public partial class Form1 : Form
            {
                private string cKey = "xxx";
                private string cSecret = "xxx";
                private string oToken = "xxx";
                private string aToken = "xxx";

                public Form1()
                {
                    InitializeComponent();
                    OnStart();
                }

                private void OnStart()
                {
                    // Authenticate with Twitter
                    TwitterService service = new TwitterService(cKey, cSecret);
                    service.AuthenticateWith(oToken, aToken);
                    service.SendTweet("testing");
                }
            }
        }

    It seems like I'm authenticating just fine - I can walk through debug mode and see all my details in the TwitterUser structure - and yet my tweets never show up on my feed. What's up?

    By the way, I'm using Visual Studio 2010 and .NET 4.0. I have verified that the oToken and aToken strings contain my developer tokens.

    Read the article

  • OOP design issue: Polymorphism

    - by Graham Phillips
    I'm trying to solve a design issue using inheritance-based polymorphism and dynamic binding. I have an abstract superclass and two subclasses. The superclass contains common behaviour. SubClassA and SubClassB define some different methods: SubClassA defines a method performTransform(), but SubClassB does not. So the following example

        1 var v:SuperClass;
        2 var b:SubClassB = new SubClassB();
        3 v = b;
        4 v.performTransform();

    would cause a compile error on line 4, as performTransform() is not defined in the superclass. We can get it to compile by casting...

        (v as SubClassA).performTransform();

    however, this will cause a runtime exception to be thrown, as v is actually an instance of SubClassB, which also does not define performTransform(). So we can get around that by testing the type of an object before casting it:

        if (typeof v == SubClassA) {
            (cast v to SubClassA).performTransform();
        }

    That will ensure that we only call performTransform() on v's that are instances of SubClassA. That's a pretty inelegant solution to my eyes, but at least it's safe.

    I have used interface-based polymorphism (interface meaning a type that can't be instantiated and defines the API of classes that implement it) in the past, but that also feels clunky. For the above case, if SubClassA and SubClassB implemented an ISuperClass interface that defined performTransform, then they would both have to implement performTransform(); if SubClassB had no real need for a performTransform(), you would have to implement an empty function.

    There must be a design pattern out there that addresses the issue.
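    For illustration, a minimal C# sketch of two standard ways this is usually handled - a virtual no-op on the base class, or a narrow capability interface checked with a type test. All names are invented; this is not from the original post:

        using System;

        // Option 2: a narrow capability interface, checked with a type test
        // instead of casting to a concrete subclass.
        interface ITransformable
        {
            void PerformTransform();
        }

        // Option 1: declare the operation on the base type as a virtual no-op,
        // so any SuperClass reference can call it safely.
        abstract class SuperClass
        {
            public virtual void PerformTransform() { /* default: do nothing */ }
        }

        class SubClassA : SuperClass, ITransformable
        {
            public override void PerformTransform() => Console.WriteLine("transforming");
        }

        class SubClassB : SuperClass { }   // inherits the no-op, does not claim the capability

        static class Demo
        {
            static void Main()
            {
                SuperClass v = new SubClassB();
                v.PerformTransform();          // Option 1: compiles and is safe (a no-op for SubClassB)

                if (v is ITransformable t)     // Option 2: act only when the capability is present
                    t.PerformTransform();
            }
        }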

    Read the article

  • Approach for caching data from data logger

    - by filip-fku
    Greetings,

    I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs needs to be acquired - data for at least a few days. The hardware interface is a UART-to-USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks.

    What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again.

    The approach I have been considering is to use a light database, like SqlServerCe, that can store the data logs as they are received. I am then hoping to search the cache first, prior to querying a device for logs. The cache would be updated with any logs obtained by the request that were not already cached.

    Finally, my question - would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!
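    A minimal sketch of the cache-first read path described above, assuming a hypothetical log store and device reader (all type and member names are invented for illustration):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public record LogEntry(DateTime Timestamp, double[] Values);

        public interface ILogCache
        {
            IList<LogEntry> GetRange(DateTime from, DateTime to);   // e.g. backed by SQL Server CE
            void Store(IEnumerable<LogEntry> entries);
        }

        public interface ILoggerDevice
        {
            IList<LogEntry> Download(DateTime from, DateTime to);   // slow UART/USB path (~30 logs/s)
        }

        public class LogRepository
        {
            private readonly ILogCache cache;
            private readonly ILoggerDevice device;

            public LogRepository(ILogCache cache, ILoggerDevice device)
            {
                this.cache = cache;
                this.device = device;
            }

            public IList<LogEntry> GetLogs(DateTime from, DateTime to)
            {
                var cached = cache.GetRange(from, to);

                // Naive completeness check: one log per minute is expected.
                var expected = (int)(to - from).TotalMinutes;
                if (cached.Count >= expected)
                    return cached;

                // Fall back to the device for the requested span, then remember anything new.
                var downloaded = device.Download(from, to);
                var missing = downloaded.Where(d => cached.All(c => c.Timestamp != d.Timestamp)).ToList();
                cache.Store(missing);

                return cached.Concat(missing).OrderBy(e => e.Timestamp).ToList();
            }
        }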

    Read the article

  • Flex Tree with infinite parents and children

    - by Tempname
    I am working on a tree component and I am having a bit of an issue with populating the data provider for this tree. The data that I get back from my database is a simple array of value objects. Each value object has 2 properties: ObjectID and ParentID. For parents the ParentID is null, and for children the ParentID is the ObjectID of the parent. Any help with this is greatly appreciated.

    Essentially the tree should look something like this: Parent1 Child1 Child1 Child2 Child1 Child2 Parent2 Child1 Child2 Child3 Child1

    This is the current code that I am testing with:

        public function setDataProvider(data:Array):void
        {
            var tree:Array = new Array();

            for (var i:Number = 0; i < data.length; i++)
            {
                // do the top level array
                if (!data[i].parentID)
                {
                    tree.push(data[i], getChildren(data[i].objectID, data));
                }
            }

            function getChildren(objectID:Number, data:Array):Array
            {
                var childArr:Array = new Array();
                for (var k:Number = 0; k < data.length; k++)
                {
                    if (data[k].parentID == objectID)
                    {
                        childArr.push(data[k]);
                        //getChildren(data[k].objectID, data);
                    }
                }
                return childArr;
            }

            trace(ObjectUtil.toString(tree));
        }

    Here is a cross section of my data:

        ObjectID    ParentID
        1           NULL
        10          NULL
        8           NULL
        6           NULL
        4           6
        3           6
        9           6
        2           6
        11          7
        7           8
        5           8
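    For reference, a minimal sketch of the recursive build that the commented-out call above is reaching for, written in C# since the idea is language-neutral (type and property names are invented for illustration):

        using System.Collections.Generic;
        using System.Linq;

        public class Node
        {
            public int ObjectId { get; set; }
            public int? ParentId { get; set; }          // null for top-level items
            public List<Node> Children { get; } = new List<Node>();
        }

        public static class TreeBuilder
        {
            // Builds the nested structure from the flat ObjectID/ParentID list.
            public static List<Node> Build(IList<Node> flat)
            {
                var roots = flat.Where(n => n.ParentId == null).ToList();
                foreach (var root in roots)
                    Attach(root, flat);
                return roots;
            }

            private static void Attach(Node parent, IList<Node> flat)
            {
                foreach (var child in flat.Where(n => n.ParentId == parent.ObjectId))
                {
                    parent.Children.Add(child);
                    Attach(child, flat);               // recurse so grandchildren are nested, not flattened
                }
            }
        }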

    Read the article

  • How Best to Replace PL/SQL with C#?

    - by Mike
    Hi,

    I write a lot of one-off Oracle SQL queries/reports (in Toad), and sometimes they can get complex, involving lots of unions, joins, and subqueries, and/or requiring dynamic SQL, and/or procedural logic. PL/SQL is custom made for handling these situations, but as a language it does not compare to C#, and even if it did, its tooling does not, and even if that did, forcing yet another language on the team is something to be avoided whenever possible.

    Experience has shown me that using SQL for the set-based processing, coupled with C# for the procedural processing, is a powerful combination indeed, and far more readable, maintainable and enhanceable than PL/SQL. So we end up with a number of small C# programs which typically construct a SQL query string piece by piece and/or run several queries and process them as needed. This kind of code could easily be a disaster, but a good developer can make it work out quite well and end up with very readable code. So I don't think it's a bad way to code for smaller DB-focused projects.

    My main question is: how best to create and package all these little C# programs that run ad hoc queries and reports against the database? Right now I create little report objects in a DLL, developed and tested with NUnit, but then I continue to use NUnit as the GUI to select and run them. NUnit just happens to provide a nice GUI for this kind of thing, even after testing has been completed.

    I'm also interested in suggestions for reporting apps generally. I feel like there is a missing piece or product. The perfect tool would allow writing and running C# inside of Toad, or SQL inside of Visual Studio, along with simple reporting facilities.

    All ideas will be appreciated, but let's make the assumption that PL/SQL is not the solution.
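    One way to package such reports without leaning on NUnit as a launcher is a small console host that discovers report classes by reflection. A minimal sketch with invented names (IReport, ReportHost), not the poster's actual code:

        using System;
        using System.Linq;
        using System.Reflection;

        public interface IReport
        {
            string Name { get; }
            void Run();   // builds its SQL, executes it, writes output
        }

        public static class ReportHost
        {
            public static void Main(string[] args)
            {
                // Find every IReport in this assembly (a separate reports DLL could be loaded instead).
                var reports = Assembly.GetExecutingAssembly()
                                      .GetTypes()
                                      .Where(t => typeof(IReport).IsAssignableFrom(t) && !t.IsAbstract)
                                      .Select(t => (IReport)Activator.CreateInstance(t))
                                      .OrderBy(r => r.Name)
                                      .ToList();

                if (args.Length == 0)
                {
                    Console.WriteLine("Available reports:");
                    foreach (var r in reports)
                        Console.WriteLine("  " + r.Name);
                    return;
                }

                var selected = reports.FirstOrDefault(
                    r => r.Name.Equals(args[0], StringComparison.OrdinalIgnoreCase));
                if (selected == null)
                    Console.WriteLine("No report named '" + args[0] + "'");
                else
                    selected.Run();
            }
        }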

    Read the article

  • Help regarding database and logic layer for my ASP.NET MVC application

    - by Ismail S
    I'm going to start a new project which is going to be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go with MySQL as the database for some reasons, but I'm worried about a few things.

    I have a good few years of experience working on SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easier to use once you are familiar with it.

    First thing is that accessing data should be easy. So I thought I should use LINQ to SQL with MySQL, but somewhere I read that it is not directly supported; the MySQL .NET connector instead adds support for Entity Framework. I don't know what the pros and cons of that are. I would love it if I can implement the repository pattern - will that be possible if I use Entity Framework?

    I'm not clear on how I should go about all this, or whether I should just forget everything and directly use LINQ to SQL on SQL Server. I'm also concerned about performance. Someone told me that if we use Entity Framework it fetches a lot of data and then filters it. Is that right?
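    As a rough illustration of the repository-pattern question, a minimal sketch over Entity Framework's DbContext API (entity and context names invented; the EF version is an assumption). Because the repository exposes IQueryable, filters are translated to SQL and run in the database rather than everything being fetched and filtered in memory, which speaks to the "fetches a lot of data" concern for straightforward queries:

        using System.Data.Entity;          // EF 4.1+ DbContext API (assumed version)
        using System.Linq;

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public class ShopContext : DbContext
        {
            public DbSet<Product> Products { get; set; }
        }

        public interface IRepository<T> where T : class
        {
            IQueryable<T> Query();
            void Add(T entity);
        }

        public class EfRepository<T> : IRepository<T> where T : class
        {
            private readonly DbContext context;
            public EfRepository(DbContext context) { this.context = context; }

            public IQueryable<T> Query() => context.Set<T>();   // deferred: nothing hits the DB yet
            public void Add(T entity) => context.Set<T>().Add(entity);
        }

        // Usage: the Where/OrderBy below become part of the generated SQL.
        // var cheap = repository.Query().Where(p => p.Price < 10m).OrderBy(p => p.Name).ToList();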

    Read the article

  • Generating jquery 'rules' from business model to UI in asp.net mvc

    - by jim
    Hi all,

    I've had a good look around and am certain that there's no matching question on SO, so here goes.

    Has anyone created a 'helper' method on their model that generates jQuery (or plain JavaScript) validation rules dynamically, based on the criteria/rules that are contained within the object and taken from a repository (i.e. DB)?

    What I'm thinking of is a discrete set of partial views (and associated models) that have rules at the business-logic 'level', and rather than (or in combination with) validating the rules at postback, translating the same rules into tightly focussed jQuery methods that work identically at the client (js) and server (c#) levels. I can see benefits here re performance. Also, the rule definitions could be created in a single place (in C#) and the jQuery generated off of that, thus allowing single edits to update both code streams.

    I appreciate that there would be limitations imposed by language-specific constraints, but the general principle could be quite interesting if used appropriately. I'm also aware that testability could be an issue when using two different language structures and hoping to achieve similar test outcomes - but those aside... any thoughts or experiences of similar out there??

    cheers
    jimi
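    A very small sketch of the single-source-of-rules idea - a C# attribute describing a rule, reflected into a dictionary that could be serialized into jQuery Validate options. Everything here (the attribute, the serializer choice, the option names) is illustrative, not an existing library:

        using System;
        using System.Collections.Generic;
        using System.Reflection;
        using System.Web.Script.Serialization;

        [AttributeUsage(AttributeTargets.Property)]
        public class RangeRuleAttribute : Attribute
        {
            public int Min { get; set; }
            public int Max { get; set; }
        }

        public class OrderModel
        {
            [RangeRule(Min = 1, Max = 100)]
            public int Quantity { get; set; }
        }

        public static class RuleEmitter
        {
            // Builds { "Quantity": { "range": [1, 100] } } for use as a jQuery Validate 'rules' object.
            public static string ToJqueryRules(Type modelType)
            {
                var rules = new Dictionary<string, object>();
                foreach (PropertyInfo p in modelType.GetProperties())
                {
                    var attr = (RangeRuleAttribute)Attribute.GetCustomAttribute(p, typeof(RangeRuleAttribute));
                    if (attr != null)
                        rules[p.Name] = new Dictionary<string, object> { { "range", new[] { attr.Min, attr.Max } } };
                }
                return new JavaScriptSerializer().Serialize(rules);
            }
        }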

    Read the article

  • Passing a string to an AJAX/JSON function

    - by Mikey1980
    I'm having trouble getting an AJAX/JSON function to work correctly. I had this function grabbing a value from a drop-down box, but now I want to use an anchor tag to set its value. I thought it would be easy to just use the onClick event to pass a string to the function I was using for the drop-down box, but it doesn't do anything. I'm stumped! Here is how I set it up:

    1st, I add an onClick event:

        <a href="<?php echo Settings::get('app.webroot'); ?>?view=schedule&action=questions" onmouseout="MM_swapImgRestore();" onmouseover="MM_swapImage('bre','','template/images/schedule/bre_f2.gif',1) onclick="assignCallType('testing')";>

    2nd, I check main.js.php:

        function assignCallType(type) {
            alert(type); // just for debugging
            new Request.JSON({
                url: "ajax.php",
                onSuccess: function (rtndata, txt) {
                    if (rtndata['STATUS'] != 'OK')
                        alert('Error assigning call type to call');
                },
                onFailure: function (xhr) {
                    alert('Error assigning call type to call');
                }
            }).get({
                'action': 'assignCallType',
                'call_type': type
            });
        }

    3rd, ajax.php: the variable is back in PHP, but values don't get added to the db, and I also didn't get the alert from main.js.php:

        if ($_GET['action'] == "assignCallType") {
            if ($USER->isInsideSales()) {
                $call_type = $_GET['call_type'];
                $_SESSION['callinfo']->setCallType($call_type);
                $_SESSION['callinfo']->save($callid);
                echo json_encode(array('STATUS'=>'OK'));
            } else {
                echo json_encode(array('STATUS'=>'DENIED'));
            }
        }

    Any idea where I am going wrong? The only difference between this and the working drop-down is how the function is called; there I used onchange="assignCallType(this.value)".

    Read the article

  • Using Lucene to index private data, should I have a separate index for each user or a single index?

    - by Nathan Bayles
    I am developing an Azure-based website and I want to provide search capabilities using Lucene. (Structured JSON objects would be indexed and stored in Lucene; other content such as Word documents, etc. would be indexed in Lucene but stored in blob storage.)

    - I want the search to be secure, such that one user would never see a document belonging to another user.
    - I want to allow ad-hoc searches as typed by the user.
    - Lastly, I want to query programmatically to return predefined sets of data, such as "all notes for user X".

    I think I understand how to add properties to each document to achieve these 3 objectives. (I am listing them here so that if anyone is kind enough to answer, they will have a better idea of what I am trying to do.)

    My questions revolve around performance and security. Can I improve document security by having a separate index for each user, or is including the user's ID as a parameter in each search sufficient? Can I improve indexing speed and the total throughput of the system by having a separate index for each user? My thinking is that having separate indexes would allow me to scale the system by having multiple index writers (perhaps even on different server instances) working at the same time, each on their own index.

    Any insight would be greatly appreciated.

    Regards,
    Nate
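    For the single-index option, the usual pattern is to stamp each document with its owner's ID and add a mandatory term clause to every query. A rough Lucene.Net sketch follows - field handling and enum names differ slightly between Lucene.Net versions, so treat the exact calls as assumptions, and the field name is invented:

        using Lucene.Net.Documents;
        using Lucene.Net.Index;
        using Lucene.Net.Search;

        public static class SecureSearch
        {
            // Stamp every indexed document with its owner.
            public static Document Tag(Document doc, string userId)
            {
                doc.Add(new Field("owner_id", userId, Field.Store.NO, Field.Index.NOT_ANALYZED));
                return doc;
            }

            // Wrap whatever query the user typed so only their own documents can match.
            public static Query Restrict(Query userQuery, string userId)
            {
                var combined = new BooleanQuery();
                combined.Add(userQuery, Occur.MUST);
                combined.Add(new TermQuery(new Term("owner_id", userId)), Occur.MUST);
                return combined;
            }
        }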

    Read the article

  • PHP modifying and combining array

    - by Industrial
    Hi everyone,

    I have a bit of an array headache going on. The function does what I want, but since I am not yet well acquainted with PHP's array/looping functions, my question is whether there's any part of this function that could be improved from a performance perspective.

    I tried to be as complete as possible in my descriptions at each stage of the function, which, shortly described, prefixes all keys in an array, fills up eventual empty/non-valid keys with '' and removes the prefixes before returning the array:

        $var = myFunction( array('key1', 'key2', 'key3', '111') );

        function myFunction ($keys) {
            $prefix = 'prefix_';
            $keyCount = count($keys);

            // Prefix each key and remove old keys
            for ($i = 0; $i < $keyCount; $i++) {
                $keys[] = $prefix.$keys[$i];
                unset($keys[$i]);
            }
            // output: array('prefix_key1', 'prefix_key2', 'prefix_key3', 'prefix_111')

            // Get all keys from memcached. Only returns valid keys.
            $items = $this->memcache->get($keys);
            // output: array('prefix_key1' => 'value1', 'prefix_key2' => 'value2', 'prefix_key3' => 'value3')
            // note: key 111 was not found in memcache.

            // Fill up eventual keys that are not valid/empty from memcache
            $return = $items + array_fill_keys($keys, '');
            // output: array('prefix_key1' => 'value1', 'prefix_key2' => 'value2', 'prefix_key3' => 'value3', 'prefix_111' => '')

            // Remove the prefixes for each result before returning the array to the application
            foreach ($return as $k => $v) {
                $expl = explode($prefix, $k);
                $return[$expl[1]] = $v;
                unset($return[$k]);
            }
            // output: array('key1' => 'value1', 'key2' => 'value2', 'key3' => 'value3', '111' => '')

            return $return;
        }

    Thanks a lot!

    Read the article

  • Is locking on the requested object a bad idea?

    - by Quick Joe Smith
    Most advice on thread safety involves some variation of the following pattern:

        public class Thing
        {
            private static readonly object padlock = new object();
            private string stuff, andNonsense;

            public string Stuff
            {
                get
                {
                    lock (Thing.padlock)
                    {
                        if (this.stuff == null)
                            this.stuff = "Threadsafe!";
                    }
                    return this.stuff;
                }
            }

            public string AndNonsense
            {
                get
                {
                    lock (Thing.padlock)
                    {
                        if (this.andNonsense == null)
                            this.andNonsense = "Also threadsafe!";
                    }
                    return this.andNonsense;
                }
            }

            // Rest of class...
        }

    In cases where the get operations are expensive and unrelated, a single locking object is unsuitable because a call to Stuff would block all calls to AndNonsense, degrading performance. And rather than create a lock object for each call, wouldn't it be better to acquire the lock on the member itself (assuming it is not something that implements SyncRoot or some such for that purpose)? For example:

        public string Stuff
        {
            get
            {
                lock (this.stuff)
                {
                    // Pretend that this is a very expensive operation.
                    if (this.stuff == null)
                        this.stuff = "Still threadsafe and good?";
                }
                return this.stuff;
            }
        }

    Strangely, I have never seen this approach recommended or warned against. Am I missing something obvious?
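    For what it's worth, a minimal sketch of the usual alternative - one dedicated lock object per independent field - since locking on the field itself fails when the field is still null (lock(null) throws) and the reference being locked changes the moment the field is assigned. This is the general pattern, not a claim about the poster's full design:

        public class Thing
        {
            private readonly object stuffLock = new object();
            private readonly object nonsenseLock = new object();
            private string stuff, andNonsense;

            public string Stuff
            {
                get
                {
                    lock (stuffLock)                    // never null, never reassigned
                    {
                        if (stuff == null)
                            stuff = "Threadsafe!";      // expensive work would go here
                        return stuff;
                    }
                }
            }

            public string AndNonsense
            {
                get
                {
                    lock (nonsenseLock)                 // independent of Stuff's lock
                    {
                        if (andNonsense == null)
                            andNonsense = "Also threadsafe!";
                        return andNonsense;
                    }
                }
            }
        }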

    Read the article

  • java.awt.Robot.keyPress for continuous keystrokes

    - by Deb
    So, here's my problem. I have a Java program which will send keystroke messages to a game (built in Unity), based on how the user interacts with an Android phone. (My Java program is a listener for the Android interaction over Wi-Fi.)

    Now, in order to do this, I am using java.awt.Robot to send key presses to the game window. I have the following code block written in my listener program:

        if (interacting) {
            Robot robot = new Robot();
            robot.keyPress(VK_A);
            robot.delay(20); // to simulate the normal keyboard rate
        }

    Now the variable interacting will be true as long as the user presses down on the touch screen of the phone, and what I intend to achieve is a continuous chain of keystroke messages being delivered to the game (through the listener). However, this is severely affecting performance for some reason. I am noticing that the game becomes slow (rapidly dropping frame rates), and even the computer becomes slow in general.

    What's going wrong? Should I use a robot.keyRelease(VK_A) after each keyPress? But my game has a different action mapped to the release of a key, and I do not want rapid key presses and releases; what I really want is to simulate continuous keystrokes, in exactly the way it would behave if the user were pressing down the A key on their keyboard manually. Please help.

    Read the article

  • Database layout for an application with geocoding features using geokit

    - by vooD
    I'm developing a real estate web catalogue and want to geocode every ad using the geokit gem. My question is: what would be the best database layout from a performance point of view if I want to search by country, city of the selected country, administrative area, or nearest metro station of the selected city? Available countries, cities, administrative areas and metro stations should be defined by the administrator of the catalogue and must be validated by geocoding.

    I came up with a single table:

        create_table "geo_locations", :force => true do |t|
          t.integer "geo_location_id"      # parent geo location (ex. country is parent geo location of city)
          t.string  "country", :null => false   # necessary for any geo location
          t.string  "city"                 # not null for city geo location and its children
          t.string  "administrative_area"  # not null for administrative_area geo location and its children
          t.string  "thoroughfare_name"    # not null for metro station or street name geo location and its children
          t.string  "premise_number"       # house number
          t.float   "lng", :null => false
          t.float   "lat", :null => false
          t.float   "bound_sw_lat", :null => false
          t.float   "bound_sw_lng", :null => false
          t.float   "bound_ne_lat", :null => false
          t.float   "bound_ne_lng", :null => false
          t.integer "mappable_id"
          t.string  "mappable_type"
          t.string  "type"                 # country, city, administrative area, metro station or address
        end

    The final geo location is the address: it contains all the necessary information to put a marker for the real estate ad on the map. But I'm still stuck on the search functionality. Any help would be highly appreciated.

    Read the article

  • Sending emails with CodeIgniter in a local XAMPP server

    - by KeyStroke
    Hi,

    I'm trying to send emails through localhost (a XAMPP for Windows 1.7.3 installation), but I've been trying for hours with no success. This is the last code I tried:

        $config = Array(
            'protocol'  => 'smtp',
            'smtp_host' => 'ssl://smtp.gmail.com',
            'smtp_port' => 465,
            'smtp_user' => '[email protected]',
            'smtp_pass' => 'mypassword',
        );

        $this->load->library('email', $config);
        $this->email->set_newline("\r\n");

        $this->email->from('[email protected]', 'My Name');
        $this->email->to('[email protected]');
        $this->email->subject('Email Test');
        $this->email->message('Testing the email class.');

        if ($this->email->send()) {
            echo 'Your email was sent.';
        } else {
            show_error($this->email->print_debugger());
        }

    Whenever I try to load this, the page shows that it's loading but nothing happens. Is there anything I need to set up in my server to ensure email delivery? I messed with php.ini and sendmail's config a bit with no luck, and openSSL isn't available, in case that matters.

    Any idea what's wrong?

    Read the article

  • Fastest way to clamp a real (fixed/floating point) value?

    - by Niklas
    Hi,

    Is there a more efficient way to clamp real numbers than using if statements or ternary operators? I want to do this both for doubles and for a 32-bit fixed-point implementation (16.16). I'm not asking for code that can handle both cases; they will be handled in separate functions.

    Obviously, I can do something like:

        double clampedA;
        double a = calculate();
        clampedA = a > MY_MAX ? MY_MAX : a;
        clampedA = a < MY_MIN ? MY_MIN : a;

    or

        double a = calculate();
        double clampedA = a;
        if (clampedA > MY_MAX)
            clampedA = MY_MAX;
        else if (clampedA < MY_MIN)
            clampedA = MY_MIN;

    The fixed-point version would use functions/macros for comparisons. This is done in a performance-critical part of the code, so I'm looking for as efficient a way to do it as possible (which I suspect would involve bit manipulation).

    EDIT: It has to be standard/portable C; platform-specific functionality is not of any interest here. Also, MY_MIN and MY_MAX are the same type as the value I want clamped (doubles in the examples above).

    Read the article

  • Obfuscator for .NET assembly (Maybe just a C++ obfuscator?)

    - by Pirate for Profit
    The software company I work for is using a ton of open source LGPL/BSD/MIT C++ code that we have written wrappers around to port "helper classes" into a .NET assembly, via C++/CLI. These libraries have wrapped old, cryptic APIs into easy-to-use ones based on common sense, and will be very helpful for a lot of different tasks. They will be included in many future clients' applications, and we might even license them to other software companies in the same field.

    So naturally we are tasked with looking into solutions for securing the code from prying eyes. What we're trying to do is stop the casual observer from seeing what's going on. Now, I have hacked some crazy shit in EverQuest and other video games in my day, so I know that with enough tireless effort anything can be done. But we don't want to make it easy for whomever.

    To the point: besides the Visual Studio compiler's optimizations, is there a C++ obfuscator or .NET assembly obfuscator (after it's been built o.O) or something that would scramble everything up, re-arrange data structures, string constants, etc.? And if such a thing exists, we'd be curious to know how it would impact performance, as some sections of code are time critical (funny saying that using a managed M$ framework).

    Read the article

  • Which isolation level should I use for the following insert-if-not-present transaction?

    - by Steve Guidi
    I've written a LINQ to SQL program that essentially performs an ETL task, and I've noticed many places where parallelization would improve its performance. However, I'm concerned about preventing uniqueness constraint violations when two threads perform the following task (pseudo-code):

        Record CreateRecord(string recordText)
        {
            using (MyDataContext database = GetDatabase())
            {
                Record existingRecord = database.MyTable.FirstOrDefault(record.KeyPredicate());
                if (existingRecord == null)
                {
                    existingRecord = CreateRecord(recordText);
                    database.MyTable.InsertOnSubmit(existingRecord);
                }
                database.SubmitChanges();
                return existingRecord;
            }
        }

    In general, this code executes a SELECT statement to test for record existence, followed by an INSERT statement if the record doesn't exist. It is encapsulated by an implicit transaction.

    When two threads run this code for the same instance of recordText, I want to prevent them from simultaneously determining that the record doesn't exist and thereby both attempting to create the same record. An isolation level and explicit transaction will work well, except I'm not certain which isolation level I should use - Serializable should work, but seems too strict. Is there a better choice?
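    As a point of reference, a minimal sketch of wrapping the check-and-insert in an explicit serializable transaction via TransactionScope, reusing the pseudo-code placeholders from the question (the BuildRecord factory name is invented). A common alternative, not shown, is to rely on the unique constraint itself: attempt the insert, catch the duplicate-key error, and re-query:

        using System.Transactions;

        Record CreateOrGetRecord(string recordText)
        {
            var options = new TransactionOptions
            {
                // Range locks keep a second thread from inserting the same key mid-transaction.
                IsolationLevel = IsolationLevel.Serializable
            };

            using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
            using (MyDataContext database = GetDatabase())
            {
                Record existingRecord = database.MyTable.FirstOrDefault(record.KeyPredicate());
                if (existingRecord == null)
                {
                    existingRecord = BuildRecord(recordText);   // hypothetical factory for the new row
                    database.MyTable.InsertOnSubmit(existingRecord);
                }
                database.SubmitChanges();

                scope.Complete();   // commit; without this, the transaction rolls back
                return existingRecord;
            }
        }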

    Read the article

  • Saving tags into a database table in CakePHP

    - by Cameron
    I have the following setup for my CakePHP app:

        Posts
            id
            title
            content

        Topics
            id
            title

        Topic_Posts
            id
            topic_id
            post_id

    So basically I have a table of Topics (tags) that are all unique and have an id, and they can be attached to a post using the Topic_Posts join table. When a user creates a new post they will fill in the topics by typing them into a textarea, separated by commas; these will then be saved into the Topics table if they do not already exist, and the references saved into the Topic_posts table.

    I have the models set up like so:

    Post model:

        class Post extends AppModel {
            public $name = 'Post';
            public $hasAndBelongsToMany = array(
                'Topic' => array('with' => 'TopicPost')
            );
        }

    Topic model:

        class Topic extends AppModel {
            public $hasMany = array(
                'TopicPost'
            );
        }

    TopicPost model:

        class TopicPost extends AppModel {
            public $belongsTo = array(
                'Topic', 'Post'
            );
        }

    And for the new post method I have this so far:

        public function add() {
            if ($this->request->is('post')) {
                //$this->Post->create();
                if ($this->Post->saveAll($this->request->data)) {
                    // Redirect the user to the newly created post (pass the slug for performance)
                    $this->redirect(array('controller'=>'posts','action'=>'view','id'=>$this->Post->id));
                } else {
                    $this->Session->setFlash('Server broke!');
                }
            }
        }

    As you can see, I have used saveAll, but how do I go about dealing with the Topic data?

    Read the article
