Search Results

Search found 18409 results on 737 pages for 'large projects'.


  • Codeigniter image manipulation class rotates image during resize

    - by someoneinomaha
    I'm using CodeIgniter's image manipulation library to re-size an uploaded image to three sizes: small, normal and large. The re-sizing is working great. However, if I'm resizing a vertical image, the library is rotating the image so it's horizontal. These are the config settings I have in place:

        $this->resize_config['image_library'] = 'gd2';
        $this->resize_config['source_image'] = $this->file_data['full_path'];
        $this->resize_config['maintain_ratio'] = TRUE;
        // These change based on the type (small, normal, large)
        $this->resize_config['new_image'] = './uploads/large/'.$this->new_file_name.'.jpg';
        $this->resize_config['width'] = 432;
        $this->resize_config['height'] = 288;

    I'm not setting the master_dim property because the default is set to auto, which is what I want. My assumption is that the library would take a vertical image, see that the height is greater than the width, and translate the height/width config appropriately so the image remains vertical. What is happening (apparently) is that the library is rotating the image when it is vertical and sizing it per the configuration. This is the code I have in place to do the actual re-sizing:

        log_message('debug', 'attempting '.$size.' photo resize');
        $this->CI->load->library('image_lib');
        $this->CI->image_lib->initialize($this->resize_config);
        if ($this->CI->image_lib->resize())
        {
            $return_value = TRUE;
            log_message('debug', $size.' photo resize successful');
        }
        else
        {
            $this->errors[] = $this->CI->image_lib->display_errors();
            log_message('debug', $size.' photo resize failed');
        }
        $this->CI->image_lib->clear();
        return $return_value;
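    For what it's worth, the workaround I'm considering (a rough sketch only, assuming plain PHP getimagesize() is acceptable here and that master_dim really isn't honoring the source orientation) is to detect portrait sources myself and swap the target box before initializing the library:

        // Rough sketch: flip the configured box for portrait sources so that
        // maintain_ratio fits the image without the orientation appearing to change.
        list($src_width, $src_height) = getimagesize($this->file_data['full_path']);
        if ($src_height > $src_width)
        {
            $tmp = $this->resize_config['width'];
            $this->resize_config['width']  = $this->resize_config['height'];
            $this->resize_config['height'] = $tmp;
        }
        $this->CI->image_lib->initialize($this->resize_config);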

    Read the article

  • VB.NET error: Microsoft.ACE.OLEDB.12.0 provider is not registered [RESOLVED]

    - by Azim
    I have a Visual Studio 2008 solution with two projects (a Word template project and a VB.NET console application for testing). Both projects reference a database project which opens a connection to an MS-Access 2007 database file, and both have references to System.Data.OleDb. In the database project I have a function which retrieves a data table as follows:

        private class AdminDatabase
            ' stores the connection string which is set in the New() method
            dim strAdminConnection as string

            public sub New()
                ...
                adminName = dlgopen.FileName
                conAdminDB = New OleDbConnection
                conAdminDB.ConnectionString = "Data Source='" + adminName + "';" + _
                                              "Provider=Microsoft.ACE.OLEDB.12.0"
                ' store the connection string in strAdminConnection
                strAdminConnection = conAdminDB.ConnectionString.ToString()
                My.Settings.SetUserOverride("AdminConnectionString", strAdminConnection)
                ...
            End Sub

            ' retrieves data from the database
            Public Function getDataTable(ByVal sqlStatement As String) As DataTable
                Dim ds As New DataSet
                Dim dt As New DataTable
                Dim da As New OleDbDataAdapter
                Dim localCon As New OleDbConnection
                localCon.ConnectionString = strAdminConnection

                Using localCon
                    Dim command As OleDbCommand = localCon.CreateCommand()
                    command.CommandText = sqlStatement
                    localCon.Open()
                    da.SelectCommand = command
                    da.Fill(dt)
                    getDataTable = dt
                End Using
            End Function
        End Class

    When I call this function from my Word 2007 template project everything works fine; no errors. But when I run it from the console application it throws the following exception:

        ex = {"The 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine."}

    Both projects have the same reference, and the console application did work when I first wrote it (a while ago), but now it has stopped working. I must be missing something but I don't know what. Any ideas? Thanks, Azim

    Read the article

  • Perl XML SAX parser emulating XML::Simple record for record

    - by DVK
    Short Q summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple. Details: We have a large code infrastructure which depends on processing records one-by-one and expects each record to be a data structure in the format produced by XML::Simple, since it has always used XML::Simple since the early Jurassic era. An example simple XML is:

        <root>
          <rec><f1>v1</f1><f2>v2</f2></rec>
          <rec><f1>v1b</f1><f2>v2b</f2></rec>
          <rec><f1>v1c</f1><f2>v2c</f2></rec>
        </root>

    And example rough code is:

        sub process_record {
            my ($obj, $record_hash) = @_;
            # do_stuff
        }

        my $records = XML::Simple->XMLin(@args)->{root};
        foreach my $record (@$records) { $obj->process_record($record) };

    As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog - due to being a DOM parser and needing to build/store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records record-by-record. However, re-writing the entire code (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like a big task not worth the resources, even at the cost of living with XML::Simple. What I'm looking for is an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can be used to produce $record hashrefs one by one based on the XML pictured above, which can be passed to $obj->process_record($record) and are 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is - e.g. whether I need to call next_record() or give it a callback coderef accepting a record.
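    To make the requirement concrete, here is a rough sketch of the kind of wrapper I have in mind, written against XML::Twig purely as an illustration (its simplify() method is documented as producing XML::Simple-style structures, though whether the output is 100% identical for our option set is exactly what I would need to verify):

        use XML::Twig;

        my $twig = XML::Twig->new(
            twig_handlers => {
                'root/rec' => sub {
                    my ($t, $rec) = @_;
                    # simplify() mimics XML::Simple's hashref layout
                    my $record_hash = $rec->simplify();
                    $obj->process_record($record_hash);
                    $t->purge;    # drop what has been parsed so far, keeping memory flat
                },
            },
        );
        $twig->parsefile($xml_file);    # $xml_file stands in for our real input path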

    Read the article

  • How to pass additional convert options to paperclip on Heroku?

    - by Yuri
    UPD:

        class User < ActiveRecord::Base
          Paperclip.options[:swallow_stderr] = false

          has_attached_file :photo,
            :styles => { :square => "100%", :large => "100%" },
            :convert_options => { :square => "-auto-orient -geometry 70X70#",
                                  :large  => "-auto-orient -geometry X300" },
            :storage => :s3,
            :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
            :path => ":attachment/:id/:style.:extension",
            :bucket => 'mybucket'

          validates_attachment_size :photo, :less_than => 5.megabyte
        end

    Works great on my local machine, but gives me an error on Heroku:

        There was an error processing the thumbnail for stream.20143

    The thing is, I want to auto-orient photos before resizing, so they are resized properly. The only working variant now (thanks to jonnii) is resizing without auto-orient:

        ...
        has_attached_file :photo,
          :styles => { :square => "70X70#", :large => "X300" },
          :storage => :s3,
          :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
          :path => ":attachment/:id/:style.:extension",
          :bucket => 'mybucket'
        ...

    How to pass additional convert options to paperclip on Heroku?

    Read the article

  • How to forward a 'saved' request stream to another Action within the same controller?

    - by Moe Howard
    We have a need to chunk up large HTTP requests sent by our mobile devices. These smaller chunk streams are merged to a file on the server. Once all chunks are received, we need a way to submit the saved merged request to another method (Action) within the same controller that will process this large HTTP request. How can this be done? The code we tried below results in the service hanging. Is there a way to do this without a round-trip?

        //Open merged chunked file
        FileStream fileStream = new FileStream(fileName, FileMode.Open, FileAccess.Read);

        //Read stream support variables
        int bytesRead = 0;
        byte[] buffer = new byte[1024];

        //Build new web request. The target Action is called "Upload"; this method we are in is called "UploadChunk"
        HttpWebRequest webRequest;
        webRequest = (HttpWebRequest)WebRequest.Create(Request.Url.ToString().Replace("Chunk", string.Empty));
        webRequest.Method = "POST";
        webRequest.ContentType = "text/xml";
        webRequest.KeepAlive = true;
        webRequest.Timeout = 600000;
        webRequest.ReadWriteTimeout = 600000;
        webRequest.Credentials = System.Net.CredentialCache.DefaultCredentials;

        Stream webStream = webRequest.GetRequestStream(); //Hangs here, no errors, just hangs

    I have looked into using RedirectToAction and RedirectToRoute, but these methods don't fit well with what we are looking to do, as we cannot edit Request.InputStream (it is read-only) to carry the large request stream. Thanks, Moe
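    For completeness, the remainder of the forwarding as we intended it looks roughly like this (a sketch only; it is never reached because of the hang above, and the copy loop itself is the obvious part, not the part in question):

        // Sketch: stream the merged file into the forwarded request body, then execute it.
        webRequest.ContentLength = fileStream.Length;   // would need to be set before GetRequestStream()
        using (Stream requestBody = webRequest.GetRequestStream())
        {
            while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                requestBody.Write(buffer, 0, bytesRead);
            }
        }
        fileStream.Close();
        using (HttpWebResponse webResponse = (HttpWebResponse)webRequest.GetResponse())
        {
            // check webResponse.StatusCode to confirm the "Upload" action accepted the data
        }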

    Read the article

  • Average function without overflow exception

    - by Ron Klein
    .NET Framework 3.5. I'm trying to calculate the average of some pretty large numbers. For instance:

        using System;
        using System.Linq;

        class Program
        {
            static void Main(string[] args)
            {
                var items = new long[] { long.MinValue + 100, long.MinValue + 200, long.MinValue + 300 };
                try
                {
                    var avg = items.Average();
                    Console.WriteLine(avg);
                }
                catch (OverflowException ex)
                {
                    Console.WriteLine("can't calculate that!");
                }
                Console.ReadLine();
            }
        }

    Obviously, the mathematical result is -9223372036854775608 (long.MinValue + 200), but I get an exception there. This is because the implementation (on my machine) of the Average extension method, as inspected with .NET Reflector, is:

        public static double Average(this IEnumerable<long> source)
        {
            if (source == null)
            {
                throw Error.ArgumentNull("source");
            }
            long num = 0L;
            long num2 = 0L;
            foreach (long num3 in source)
            {
                num += num3;
                num2 += 1L;
            }
            if (num2 <= 0L)
            {
                throw Error.NoElements();
            }
            return (((double) num) / ((double) num2));
        }

    I know I can use a BigInt library (yes, I know that it is included in .NET Framework 4.0, but I'm tied to 3.5). But I still wonder if there's a pretty straightforward implementation of calculating the average of integers without an external library. Do you happen to know about such an implementation? Thanks!! UPDATE: The previous example, of three large integers, was just an example to illustrate the overflow issue. The question is about calculating the average of any set of numbers whose sum might exceed the type's max value. Sorry about this confusion. I also changed the question's title to avoid additional confusion. Thanks all!!
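    For reference, the direction I'm currently leaning toward is an incremental (running) mean, which never materializes the full sum; a rough sketch (not a drop-in replacement for Enumerable.Average, and it trades exactness for range by accumulating in double):

        // Running mean: avg_n = avg_(n-1) + (x_n - avg_(n-1)) / n.
        // The raw values are never summed, so nothing can overflow.
        // (Would live in a static class; needs using System.Collections.Generic;)
        public static double AverageSafe(this IEnumerable<long> source)
        {
            if (source == null) throw new ArgumentNullException("source");
            double average = 0.0;
            long count = 0L;
            foreach (long value in source)
            {
                count++;
                average += (value - average) / count;
            }
            if (count == 0L) throw new InvalidOperationException("Sequence contains no elements");
            return average;
        }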

    Read the article

  • 'Must Override a Superclass Method' Errors after importing a project into Eclipse

    - by Tim H
    Anytime I have to re-import my projects into Eclipse (if I reinstalled Eclipse, or changed the location of the projects), almost all of my overridden methods are not formatted correctly, causing the error 'The method ?????????? must override a superclass method'. It may be noteworthy to mention this is with Android projects - for whatever reason, the method argument values are not always populated, so I have to manually populate them myself. For instance:

        list.setOnCreateContextMenuListener(new OnCreateContextMenuListener() {
            public void onCreateContextMenu(ContextMenu menu, View v, ContextMenuInfo menuInfo) {
                //These arguments have their correct names
            }
        });

    will be initially populated like this:

        list.setOnCreateContextMenuListener(new OnCreateContextMenuListener() {
            public void onCreateContextMenu(ContextMenu arg1, View arg2, ContextMenuInfo arg3) {
                //This method's arguments were not automatically provided
            }
        });

    The odd thing is, if I remove my code and have Eclipse automatically recreate the method, it uses the same argument names I already had, so I don't really know where the problem is, other than in how it auto-formats the method for me. It becomes quite a pain having to manually recreate ALL my overridden methods by hand. If anyone can explain why this happens or how to fix it, I would be very happy. Maybe it is due to the way I am formatting the methods, which are inside an argument of another method?

    Read the article

  • Java equivalent to VS solution file

    - by Chris
    I'm a C# guy trying to learn Java. I understand the syntax and the basic architecture of the Java platform, and have no problem doing smaller projects myself, but I'd really like to be able to download some open source projects to learn from the work of others. However, I'm running into a stumbling block that I can't seem to find any information on. When I download an open source .NET project, I can open the .sln file with Visual Studio and everything just loads. Sure, there's occasionally a missing reference or something, but there's really very little configuration required to get things going. I'm not sensing the same ease of use with Java. I'm using Eclipse at the moment, and it feels like for every project I have to create a brand new Eclipse project using "create from existing source", and almost nothing compiles properly without significant reconfiguration. In the case of web projects, it's even worse, because Eclipse doesn't appear to support creating a web project from existing source. I have to create a standard Java project from source, and then apparently modify the project file to include the bindings for the web toolkit stuff to work properly. Assuming I want to be able to contribute to a project later on, I shouldn't have to be making such drastic changes to the file structure to get my IDE to a workable state. What am I missing?

    Read the article

  • Experience migrating legacy Cobol/PL1 to Java

    - by MadMurf
    ORIGINAL Q: I'm wondering if anyone has had experience of migrating a large Cobol/PL1 codebase to Java? How automated was the process and how maintainable was the output? How did the move from transactional to OO work out? Any lessons learned along the way or resources/white papers that may be of benefit would be appreciated. EDIT 7/7: Certainly the NACA approach is interesting; the ability to continue making your BAU changes to the COBOL code right up to the point of releasing the Java version has merit for any organization. The argument for procedural Java in the same layout as the COBOL, to give the coders a sense of comfort while familiarizing themselves with the Java language, is a valid argument for a large organisation with a large code base. As @Didier points out, the $3mil annual saving gives scope for generous padding on any BAU changes going forward to refactor the code on an ongoing basis. As he puts it, if you care about your people you find a way to keep them happy while gradually challenging them. The problem as I see it with the suggestion from @duffymo to "best to try and really understand the problem at its roots and re-express it as an object-oriented system" is that if you have any BAU changes ongoing, then during the LONG project lifetime of coding your new OO system you end up coding & testing changes on the double. That is a major benefit of the NACA approach. I've had some experience of migrating client-server applications to a web implementation, and this was one of the major issues we encountered: constantly shifting requirements due to BAU changes. It made PM & scheduling a real challenge. Thanks to @hhafez, whose experience is nicely put as "similar but slightly different" and who has had a reasonably satisfactory experience of an automatic code migration from Ada to Java. Thanks @Didier for contributing; I'm still studying your approach and if I have any Q's I'll drop you a line.

    Read the article

  • Android Layout: Display as many ImageViews as possible without scrolling

    - by Toni4780
    I am working on an app which should display several same-size images on the screen, but only as many images as will fit without requiring scrolling. E.g. on a "big" tablet it could display 10x10 ImageViews (the screen is large, so there is much space for pictures). On a "big" phone there might be enough space to display 6x6 ImageViews, so it should only display a 6x6 array of images. On a small phone there is probably only space for 4x4 ImageViews, so it should only display this. How can I make this in Android? I know about "layout-large", ... but if I make a special fixed XML layout for a "large" device, it would not fit all devices correctly. E.g. a Galaxy Nexus is a "normal" device and so is a Nexus One, but there would be space for at least one or two more ImageView rows on a Galaxy Nexus than on a Nexus One. So do I have to measure in code somehow how big the resolution is and display some TableRows accordingly? Or is there a special way I can manage this?
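    To make the "measure in code" option concrete, the kind of thing I imagine is roughly this (a sketch; the 80dp cell size is just an assumed value for the image dimensions):

        // Rough sketch: derive rows/columns from the real screen size inside the Activity.
        DisplayMetrics metrics = new DisplayMetrics();
        getWindowManager().getDefaultDisplay().getMetrics(metrics);

        int cellSizePx = (int) (80 * metrics.density);       // assumed 80dp per image
        int columns = Math.max(1, metrics.widthPixels / cellSizePx);
        int rows    = Math.max(1, metrics.heightPixels / cellSizePx);

        // columns * rows ImageViews would then be added row by row (e.g. to a
        // TableLayout), so the grid fills the screen without ever scrolling.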

    Read the article

  • PHP cURL JSON Decode (X-AUTH Header)

    - by TheCyX
        <?php
        // Show Profile
        $curl = curl_init();
        curl_setopt ($curl, CURLOPT_URL, "https://example/api");
        curl_setopt ($curl, CURLOPT_RETURNTRANSFER, true);
        curl_setopt ($curl, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
        curl_setopt ($curl, CURLOPT_HTTPHEADER, array('X-AUTH: 123456789'));
        $projects = curl_exec($curl);

        // This is empty?
        echo $projects;

        //Decode
        $phpArray = json_decode($projects);
        print_r($phpArray);

        foreach ($phpArray as $key => $value) {
            // Line 17, sure it's empty, but why?
            echo "<p>$key | $value</p>";
        }
        ?>

    This produces:

        Warning: Invalid argument supplied for foreach() in /html/api.php on line 17

    The API needs this authentication:

        $ curl -i -H "X-AUTH: 123456789" https://example/api

    JSON file:

        {"id":"123456","hostId":null,"Nickname":"thecyx","DisplayName":"thecyx","AppDisplayName":"thecyx","Score":"300","Account":"Full"}

    The $projects variable is empty. If I put the API URL in the browser, it works. And, if possible, what's the correct way to get the JSON data, e.g. [Nickname], [Score]?
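    For the second part of the question, this is roughly how I expected to read individual fields once $projects actually holds the JSON above (a sketch; the open issue is still why curl_exec() returns nothing):

        // Rough sketch: check for a transport error, then decode to an associative array.
        if ($projects === false || $projects === '') {
            echo 'cURL error: ' . curl_error($curl);
        } else {
            $data = json_decode($projects, true);    // true => associative array
            echo '<p>Nickname: ' . $data['Nickname'] . '</p>';
            echo '<p>Score: '    . $data['Score']    . '</p>';
        }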

    Read the article

  • Getting Visual Studio macros in console app

    - by Paul Steckler
    In a Visual Studio extension, you can get the default include paths for all projects with C# code like: String dirs = dte2.get_Properties("Projects", "VCDirectories"); where dte2 is the Visual Studio application object. Usually, those directories contain macros like $(INCLUDE). You can expand those macros by looking at dte2.Solution.Projects, finding the relevant project in that collection; from the project, look at project.Configurations, find the relevant configuration, and call its Evaluate method. In VS2005/VS2008, there's a .vssettings file that contains the VCDirectories. In VS2010, there's a property sheet with the same information. A console application can just parse those files -- great. But how can you expand the macros? As a first step, I tried instantiating a VCProjectEngine object in a console app, but that just resulted in a COM failure. So I don't know how to instantiate a VCProject object in order to follow the same strategy I used in a VS extension. Where are the macro bindings stored?

    Read the article

  • AjaxControlToolkit Resource Files Not Copied To Output in MSBuild Script

    - by Dario Solera
    I'm new to MSBuild, but I managed to set up the following simple script:

        <Project ToolsVersion="3.5" DefaultTargets="Compile" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>
            <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
          </PropertyGroup>
          <ItemGroup>
            <SolutionRoot Include=".." />
            <BuildArtifacts Include=".\Artifacts\" />
            <SolutionFile Include="..\SolutionName.sln" />
          </ItemGroup>
          <Target Name="Clean">
            <RemoveDir Directories="@(BuildArtifacts)" />
          </Target>
          <Target Name="Init" DependsOnTargets="Clean">
            <MakeDir Directories="@(BuildArtifacts)" />
          </Target>
          <Target Name="Compile" DependsOnTargets="Init">
            <MSBuild Projects="@(SolutionFile)" Properties="OutDir=%(BuildArtifacts.FullPath);Configuration=$(Configuration)" />
            <MakeDir Directories="%(BuildArtifacts.FullPath)\_PublishedWebsites\RDE.XAP.UnifiedGui.Web\Temp" />
          </Target>
        </Project>

    The solution has 23 projects, 4 of which are WebApps. Now, the script works fine and the output is generated correctly. The only problem I encounter is with two WebApp projects in the solution that use the AJAX Control Toolkit. The toolkit has a set of folders (e.g. ar, it, es, fr) that contain localized resources. These folders are not copied to the bin directory of the WebApps when the solution is built in MSBuild, but they are copied when it is built in Visual Studio. How can I solve this in a clean manner? I know I could write a (quite convoluted) task that copies the directories after the compile, but it does not seem the right solution to me. Also, neither Google, SO nor MSDN could provide more details on this kind of issue.

    Read the article

  • sqrt(int_value + 0.0) -- Does it have a purpose?

    - by Earlz
    Hello, while doing some homework in my very strange C++ book, which I've been told before to throw away, I found a very peculiar code segment. I know homework stuff always throws in extra "mystery" to try to confuse you, like indenting 2 lines after a single-statement for-loop. But this one I'm confused by, because it seems to serve some real purpose. Basically it is like this:

        int counter=10;
        ...
        if(pow(floor(sqrt(counter+0.0)),2) == counter)
        ...

    I'm interested in this part especially:

        sqrt(counter+0.0)

    Is there some purpose to the +0.0? Is this the poor man's way of doing a static cast to a double? Does this avoid some compiler warning on some compiler I do not use? The entire program printed the exact same thing and compiled without warnings on g++ whenever I left out the +0.0 part. Maybe I'm not using a weird enough compiler? Edit: Also, does gcc just break the standard and not give an error for an ambiguous reference, since sqrt can take 3 different types of parameters?

        [earlz@EarlzBeta-~/projects/homework1] $ cat calc.cpp
        #include <cmath>
        int main(){
            int counter=0;
            sqrt(counter);
        }
        [earlz@EarlzBeta-~/projects/homework1] $ g++ calc.cpp
        /usr/lib/libstdc++.so.47.0: warning: strcpy() is almost always misused, please use strlcpy()
        /usr/lib/libstdc++.so.47.0: warning: strcat() is almost always misused, please use strlcat()
        [earlz@EarlzBeta-~/projects/homework1] $
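    For comparison, the explicit spelling I would normally expect instead of the +0.0 trick is just a cast; something like this sketch (the overload-ambiguity angle is only my guess at what the +0.0 might be sidestepping on stricter library implementations, where sqrt exists for float, double and long double but not int):

        #include <cmath>

        int main() {
            int counter = 10;
            // Explicit alternative to "counter + 0.0": make the int-to-double promotion visible.
            double root = std::sqrt(static_cast<double>(counter));
            (void)root;    // sketch only; nothing else is done with the value
            return 0;
        }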

    Read the article

  • Unresolved external symbol - MySQL API C++

    - by Zack_074
    Hey, I've had nonstop problems with SQL. I'm trying to get some experience because I know it's a vital part of the industry. I got it working with C#, but now I'm working on connecting to a database in C++. I have the project properly linked and whatnot. Here's my code and the errors I'm getting:

        #include "stdafx.h"
        #include <mysql.h>
        #include <iostream>

        MYSQL mysql;
        MYSQL_RES result;

        using namespace std;

        int _tmain(int argc, _TCHAR* argv[])
        {
            mysql_init(&mysql);
            if(!mysql_real_connect(&mysql, "localhost", "root", "angel552002", "MyDatabse", 0, NULL, 0))
            {
                printf("Failed to connect");
            }
            return 0;
        }

    and the errors:

        Error 1 error LNK2001: unresolved external symbol _mysql_real_connect@32 c:\Users\Zack-074\documents\visual studio 2010\Projects\MySql\MySql\MySql.obj
        Error 2 error LNK2001: unresolved external symbol _mysql_init@4 c:\Users\Zack-074\documents\visual studio 2010\Projects\MySql\MySql\MySql.obj
        Error 3 error LNK1120: 2 unresolved externals c:\users\zack-074\documents\visual studio 2010\Projects\MySql\Debug\MySql.exe 1

    I really appreciate the help.

    Read the article

  • Should programmers do Pro Bono work? Where are the code public defenders?

    - by Tj Kellie
    How many projects are people doing based on the pro bono publico ideal, versus working for the highest wage or the potential for a cash-in-buy-out payday? For years lawyers have been called out for excessive gathering of wealth from high bill rates and huge settlement deals, hiring out their knowledge and skills to the highest bidders. People call for them to do more for free, using the laws and their time to defend or further some cause that's in the public's best interest. Is professional software development that different? So many bright people and so much knowledge of complex systems. Do you think that there is enough of a "pro bono" movement to solve the social and public problems in the industry right now? If so, what are the examples to point to? OLPC? NOTE: Saying that open source software is the same as pro bono misses the point completely. I was looking for specific projects with a social context, not just group-sourcing for free software. Just because you're not making anyone pay for your software does not mean it's doing anyone any good. I'm not calling for mandated pro bono work for programmers; I really just want some objective opinions and concrete examples of socially minded software/tech development projects like the One Laptop Per Child project. I'm sure open source would be a natural tie-in for some.

    Read the article

  • GenerateDSYMFile warning: unable to open object file

    - by regulus6633
    The background: I have a project that I last built on 10.5 on a PPC computer using xcode v3.1. It builds against the 10.4 SDK. I now have a MacBook with 10.6 on it and Xcode v3.2.1. I installed the 10.4 SDK with xcode. So now I want to build the project on an intel chip on 10.6. I first get a build error because I have the wrong version of gcc setup so I change the build settings to use gcc 4.0. The problem: Now when I build the project I get the following warning: GenerateDSYMFile "build/Release/What's Keeping Me?.app.dSYM" "build/Release/What's Keeping Me?.app/Contents/MacOS/What's Keeping Me?" cd "/Users/hmcshane/Development/ Cocoa Projects/What's Keeping Me?" /Developer/usr/bin/dsymutil "/Users/hmcshane/Development/ Cocoa Projects/What's Keeping Me?/build/Release/What's Keeping Me?.app/Contents/MacOS/What's Keeping Me?" -o "/Users/hmcshane/Development/ Cocoa Projects/What's Keeping Me?/build/Release/What's Keeping Me?.app.dSYM" warning: (i386) /Users/hmcshane/Downloads/Csu-71/crt.dynamic_no_pic.o unable to open object file warning: (ppc7400) /Users/hmcshane/Downloads/Csu-71/crt.dynamic_no_pic.o unable to open object file Any idea what this is? And why is the path for the problem files rooted in my downloads folder? The project certainly doesn't reside there.

    Read the article

  • compact Number formatting behavior in Java (automatically switch between decimal and scientific notation)

    - by kostmo
    I am looking for a way to format a floating point number dynamically in either standard decimal format or scientific notation, depending on the value of the number. For moderate magnitudes, the number should be formatted as a decimal with trailing zeros suppressed. If the floating point number is equal to an integral value, the decimal point should also be suppressed. For extreme magnitudes (very small or very large), the number should be expressed in scientific notation. Alternately stated: if the number of characters in the expression as standard decimal notation exceeds a certain threshold, switch to scientific notation. I should have control over the maximum number of digits of precision, but I don't want trailing zeros appended to express the minimum precision; all trailing zeros should be suppressed. Basically, it should optimize for compactness and readability.

        2.80000            -> 2.8
        765.000000         -> 765
        0.0073943162953    -> 0.00739432    (limit digits of precision - to 6 in this case)
        0.0000073943162953 -> 7.39432E-6    (switch to scientific notation if the magnitude is small enough - less than 1E-5 in this case)
        7394316295300000   -> 7.39432E+15   (switch to scientific notation if the magnitude is large enough - for example, when greater than 1E+10)
        0.0000073900000000 -> 7.39E-6       (strip trailing zeros from the significand in scientific notation)
        0.000007299998344  -> 7.3E-6        (rounding from the 6-digit precision limit causes this number to have trailing zeros, which are stripped)

    Here's what I've found so far: The .toString() method of the Number class does most of what I want, except it doesn't upconvert to integer representation when possible, and it will not express large integral magnitudes in scientific notation. Also, I'm not sure how to adjust the precision. The "%G" format string to the String.format(...) function allows me to express numbers in scientific notation with adjustable precision, but does not strip trailing zeros. I'm wondering if there's already some library function out there that meets these criteria. I guess the only stumbling block in writing this myself is having to strip the trailing zeros from the significand in scientific notation produced by %G.
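    To show the shape of what I'd otherwise hand-roll, here is a rough sketch built on BigDecimal plus %E (the manual stripping of trailing zeros from the significand is exactly the part I'd rather a library owned; the 1E-5 / 1E10 thresholds are the ones from my examples):

        // Rough sketch: plain decimal for moderate magnitudes, scientific notation otherwise.
        static String compact(double value, int maxDigits) {
            if (value == 0.0) return "0";
            double magnitude = Math.abs(value);
            if (magnitude >= 1e-5 && magnitude < 1e10) {
                java.math.BigDecimal bd =
                        new java.math.BigDecimal(value, new java.math.MathContext(maxDigits));
                return bd.stripTrailingZeros().toPlainString();      // 765.000000 -> "765"
            }
            // Scientific notation, then strip trailing zeros from the significand by hand.
            String[] parts = String.format("%." + (maxDigits - 1) + "E", value).split("E");
            String significand = parts[0].contains(".")
                    ? parts[0].replaceAll("0+$", "").replaceAll("\\.$", "")
                    : parts[0];
            int exponent = Integer.parseInt(parts[1].replace("+", ""));  // "-06" -> -6
            return significand + "E" + (exponent >= 0 ? "+" : "") + exponent;
        }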

    Read the article

  • calloc v/s malloc and time efficiency

    - by yCalleecharan
    Hi, I've read with interest the post "c difference between malloc and calloc". I'm using malloc in my code and would like to know what difference using calloc instead would make. My present (pseudo)code with malloc:

    Scenario 1

        int main()
        {
            allocate large arrays with malloc

            INITIALIZE ALL ARRAY ELEMENTS TO ZERO

            for loop  // say 1000 times
                do something and write results to arrays
            end for loop

            FREE ARRAYS with free command
        } // end main

    If I use calloc instead of malloc, then I'll have:

    Scenario 2

        int main()
        {
            for loop  // say 1000 times
                ALLOCATION OF ARRAYS WITH CALLOC
                do something and write results to arrays
                FREE ARRAYS with free command
            end for loop
        } // end main

    I have three questions: Which of the scenarios is more efficient if the arrays are very large? Which of the scenarios will be more time efficient if the arrays are very large? In both scenarios, I'm just writing to arrays, in the sense that for any given iteration in the for loop I'm writing each array sequentially from the first element to the last element. The important question: if I'm using malloc as in scenario 1, then is it necessary that I initialize the elements to zero? Say with malloc I have array z = [garbage1, garbage2, garbage3]. For each iteration, I'm writing elements sequentially, i.e. in the first iteration I get z = [some_result, garbage2, garbage3], in the second iteration I get z = [some_result, another_result, garbage3], and so on. Do I need to specifically initialize my arrays after malloc?
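    As a minimal concrete version of the two scenarios (a sketch; N, the iteration count and the "do something" step are placeholders):

        #include <stdlib.h>
        #include <string.h>

        #define N 1000000                          /* placeholder array size */

        int main(void)
        {
            /* Scenario 1: allocate once outside the loop and reuse the buffer. */
            double *z = malloc(N * sizeof *z);
            memset(z, 0, N * sizeof *z);           /* only matters if elements are read before written */
            for (int i = 0; i < 1000; i++) {
                /* ... write z[0] .. z[N-1] sequentially, then use the results ... */
            }
            free(z);

            /* Scenario 2: allocate zeroed memory inside the loop with calloc. */
            for (int i = 0; i < 1000; i++) {
                double *z2 = calloc(N, sizeof *z2);
                /* ... write and use z2 ... */
                free(z2);
            }
            return 0;
        }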

    Read the article

  • On Ruby on Rails, <%= or <% should only matter whether it is show or no show, but why will it give a compile error?

    - by Jian Lin
    The following code:

        <div id="vote_form">
          <%= form_remote_tag :url => story_votes_path(@story) do %>
            <%= submit_tag 'shove it' %>
          <% end %>
        </div>

    gives a compilation error, while if the first <%= is replaced with <%, then everything works. I thought they only differ by "show" or "not show", but why will it actually cause a compile error? The error is:

        SyntaxError in Stories#show

        Showing app/views/stories/show.html.erb where line #17 raised:

        compile error
        C:/Software Projects/ror/shov12/app/views/stories/show.html.erb:17: syntax error, unexpected ')'
        ... story_votes_path(@story) do ).to_s); @output_buffer.concat ...
                                        ^
        C:/Software Projects/ror/shov12/app/views/stories/show.html.erb:23: syntax error, unexpected kENSURE, expecting ')'
        C:/Software Projects/ror/shov12/app/views/stories/show.html.erb:25: syntax error, unexpected kEND, expecting ')'

    Read the article

  • parameter for xcodebuild for using latest sdk.

    - by Maciek Sawicki
    I'm using an ant exec task to execute xcodebuild to build some iOS projects in Hudson. I would like to be able to create the script in a way that does not require specifying the SDK version, because after updating the SDK on the Hudson slave or in my iOS projects, all my projects start failing.... There is a nice option in Xcode since SDK 4.2 in the target setup, Base SDK - Latest iOS, and then I don't have to provide the -sdk param to the xcodebuild command. But then (I think) it's taken from the Xcode project, and that's bad, because someone could accidentally change the target from simulator to device during a commit. I need something that is constant. I would prefer not to use an env variable, because I would like to be able to run this ant task also on dev machines, and would rather not have to remember to set it on all machines. Unfortunately xcodebuild -showsdks gives only:

        Mac OS X SDKs:
                Mac OS X 10.4   -sdk macosx10.4
                Mac OS X 10.5   -sdk macosx10.5
                Mac OS X 10.6   -sdk macosx10.6

        iOS SDKs:
                iOS 4.2         -sdk iphoneos4.2

        iOS Simulator SDKs:
                Simulator - iOS 3.2   -sdk iphonesimulator3.2
                Simulator - iOS 4.0   -sdk iphonesimulator4.0
                Simulator - iOS 4.1   -sdk iphonesimulator4.1
                Simulator - iOS 4.2   -sdk iphonesimulator4.2

    I need something like -sdk iphoneosLatest. My only idea is to parse the output of xcodebuild -showsdks with some script, but I don't like this idea.

    Read the article

  • My OpenCL kernel is slower on faster hardware.. But why?

    - by matdumsa
    Hi folks, as I was finishing coding my project for a multicore programming class, I came upon something really weird that I wanted to discuss with you. We were asked to create any program that would show significant improvement when programmed for a multi-core platform. I decided to try and code something on the GPU to try out OpenCL. I chose the matrix convolution problem since I'm quite familiar with it (I've parallelized it before with open_mpi with great speedup for large images). So here it is: I select a large GIF file (2.5 MB) [2816x2112], I run the sequential version (original code), and I get an average of 15.3 seconds. I then run the new OpenCL version I just wrote on my MBP's integrated GeForce 9400M and I get timings of 1.26 s on average. So far so good, it's a speedup of 12x!! But now I go into my energy saver panel to turn on the "Graphic Performance Mode". That mode turns off the GeForce 9400M and turns on the GeForce 9600M GT my system has. Apple says this card is twice as fast as the integrated one. Guess what: my timings using the kick-ass graphics card are 3.2 seconds on average... My 9600M GT seems to be more than two times slower than the 9400M. For those of you who are OpenCL inclined: I copy all data to remote buffers before starting, so the actual computation doesn't require a roundtrip to main RAM. Also, I let OpenCL determine the optimal local worksize, as I've read they've done a pretty good implementation at figuring that parameter out. Anyone has a clue? edit: full source code with makefiles here: http://www.mathieusavard.info/convolution.zip

        cd gimage
        make
        cd ../clconvolute
        make

    Put a large input.gif in clconvolute and run it to see the results.

    Read the article

  • Does the query plan optimizer work well with joined/filtered table-valued functions?

    - by smoothdeveloper
    In SQL Server 2005, I'm using table-valued functions as a convenient way to perform arbitrary aggregation on subset data from a large table (passing a date range or such parameters). I'm using these inside larger queries as joined computations, and I'm wondering if the query plan optimizer works well with them in every condition, or if I'm better off unnesting such computations in my larger queries. Does the query plan optimizer unnest table-valued functions when it makes sense? If it doesn't, what do you recommend to avoid the code duplication that would occur by manually unnesting them? If it does, how do you identify that from the execution plan? Code sample:

        create table dbo.customers
        (
            [key] uniqueidentifier
            , constraint pk_dbo_customers primary key ([key])
        )
        go

        /* assume large amount of data */
        create table dbo.point_of_sales
        (
            [key] uniqueidentifier
            , customer_key uniqueidentifier
            , constraint pk_dbo_point_of_sales primary key ([key])
        )
        go

        create table dbo.product_ranges
        (
            [key] uniqueidentifier
            , constraint pk_dbo_product_ranges primary key ([key])
        )
        go

        create table dbo.products
        (
            [key] uniqueidentifier
            , product_range_key uniqueidentifier
            , release_date datetime
            , constraint pk_dbo_products primary key ([key])
            , constraint fk_dbo_products_product_range_key foreign key (product_range_key)
                references dbo.product_ranges ([key])
        )
        go

        /* assume large amount of data */
        create table dbo.sales_history
        (
            [key] uniqueidentifier
            , product_key uniqueidentifier
            , point_of_sale_key uniqueidentifier
            , accounting_date datetime
            , amount money
            , quantity int
            , constraint pk_dbo_sales_history primary key ([key])
            , constraint fk_dbo_sales_history_product_key foreign key (product_key)
                references dbo.products ([key])
            , constraint fk_dbo_sales_history_point_of_sale_key foreign key (point_of_sale_key)
                references dbo.point_of_sales ([key])
        )
        go

        create function dbo.f_sales_history_..snip.._date_range
        (
            @accountingdatelowerbound datetime,
            @accountingdateupperbound datetime
        )
        returns table
        as
        return
        (
            select pos.customer_key
                 , sh.product_key
                 , sum(sh.amount) amount
                 , sum(sh.quantity) quantity
            from dbo.point_of_sales pos
                inner join dbo.sales_history sh
                    on sh.point_of_sale_key = pos.[key]
            where sh.accounting_date between @accountingdatelowerbound and @accountingdateupperbound
            group by pos.customer_key
                   , sh.product_key
        )
        go

        -- TODO: insert some data

        -- this is a table containing a selection of product ranges
        declare @selectedproductranges table([key] uniqueidentifier)

        -- this is a table containing a selection of customers
        declare @selectedcustomers table([key] uniqueidentifier)

        declare @low datetime
              , @up datetime

        -- TODO: set top query parameters

        select saleshistory.customer_key
             , saleshistory.product_key
             , saleshistory.amount
             , saleshistory.quantity
        from dbo.products p
            inner join @selectedproductranges productrangeselection
                on p.product_range_key = productrangeselection.[key]
            inner join @selectedcustomers customerselection
                on 1 = 1
            inner join dbo.f_sales_history_..snip.._date_range(@low, @up) saleshistory
                on saleshistory.product_key = p.[key]
                and saleshistory.customer_key = customerselection.[key]

    I hope the sample makes sense. Many thanks for your help!

    Read the article

  • How to insert an Array/Object into SQL (best practice)

    - by Jason
    I need to store three items as an array in a single column and be able to quickly/easily modify that data in later functions.

    [---YOU CAN SKIP THIS PART IF YOU TRUST ME--] To be clear, I love and use x_ref tables all the time, but an x_ref doesn't work here because this is not a one-to-many relationship. I am making a project management tool that, among other things, assigns a user to a project and assigns hours to that project on a weekly basis, per user, sometimes for many weeks into the future. Of course there are many projects, a project can have many team members, and a team member can be involved with many projects at one time, BUT it's not one-to-many, because a team member can be working many weeks on the same project but have different hours for different weeks. In other words, each object really is unique. Also/finally, this data can be changed at any time by any team member - hence it needs to be easy to manipulate. Basically, I need to handle three values (the team member, the week we're talking about, and how many hours) dropped into a project row in the projects table (under the column for project team members) and treated as one item - a team member - that will actually be part of a larger array of all the team members involved on the project. [--END SKIP, START READING HERE :) --]

    So assuming that the application's general schema and relation tables aren't total crap and that we are in fact up against a wall in this one case to use an array/object as a value for this column, is there a best practice for that? Like a particular SQL data type? A particular object/array format? CSV? JSON? XML? Most of the app is in C#, but (for very odd reasons that I won't explain) we could really use any environment if there is a particular one that handles this well. For the moment, I am thinking either (webservice + JS/JSON) or PHP unserialize/serialize (but I am a bit sketched out by the PHP solution because it seems a bit cumbersome when using ajax?). Thoughts, anyone?

    Read the article
