Search Results

Search found 24734 results on 990 pages for 'floating point conversion'.

Page 273 of 990.

  • Android: How to periodically send location to a server

    - by Mark
    Hi, I am running a web service that allows users to record their trips (kind of like Google's MyTracks) as part of a larger app. It is easy to pass data, including coordinates and other items, to the server when a user starts or ends a trip. Being a newbie, though, I am not sure how to set up a background service that sends location updates once every pre-determined period (min 3 minutes, max 1 hr) until the user flags the end of the trip, or until a preset amount of time elapses. Once the trip is started from the phone, the server responds with a polling period for the phone to use as the interval between updates. This part works, in that I can display the response on the phone, and my server registers the user's action. Similarly, the trip is closed server-side upon the close-trip request. However, when I tried starting a periodic tracking method from inside the StartTrack Activity, using requestLocationUpdates(String provider, long minTime, float minDistance, LocationListener listener), where minTime is the poll period from the server, it just did not work, and I'm not getting any errors. So I'm clueless at this point, never having used Android before. I have seen many posts here on using background services with handlers, pending intents, and other things to do similar stuff, but I really don't understand how to do it. I would like the user to be able to do other things on the phone while the updates are going on, so if you could point me to a tutorial that shows how to actually write background services (maybe these run as separate classes?) or other ways of doing this, that would be great. Thanks!
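
    One way to do this is a standalone Service that owns the location updates, so they outlive any single Activity and the user can keep using the phone. Below is a minimal, untested sketch; TripTrackingService, the "pollPeriodMs" extra, and postToServer are made-up names for illustration, not part of any existing API:

        import android.app.Service;
        import android.content.Intent;
        import android.location.Location;
        import android.location.LocationListener;
        import android.location.LocationManager;
        import android.os.Bundle;
        import android.os.IBinder;

        public class TripTrackingService extends Service implements LocationListener {
            private LocationManager locationManager;

            @Override
            public int onStartCommand(Intent intent, int flags, int startId) {
                // poll period handed over by the Activity; defaults to the 3-minute minimum
                long pollPeriodMs = intent.getLongExtra("pollPeriodMs", 3 * 60 * 1000L);
                locationManager = (LocationManager) getSystemService(LOCATION_SERVICE);
                // minTime is a hint: the provider delivers updates no faster than this
                locationManager.requestLocationUpdates(
                        LocationManager.GPS_PROVIDER, pollPeriodMs, 0f, this);
                return START_STICKY; // keep running until stopService() flags the trip's end
            }

            @Override
            public void onLocationChanged(final Location location) {
                // never do network I/O on the main thread
                new Thread(new Runnable() {
                    public void run() {
                        postToServer(location.getLatitude(), location.getLongitude());
                    }
                }).start();
            }

            private void postToServer(double lat, double lng) {
                // hypothetical helper: HTTP POST of the fix to the trip endpoint
            }

            @Override
            public void onDestroy() {
                locationManager.removeUpdates(this); // stop updates when the trip closes
                super.onDestroy();
            }

            @Override public IBinder onBind(Intent intent) { return null; }
            @Override public void onStatusChanged(String provider, int status, Bundle extras) {}
            @Override public void onProviderEnabled(String provider) {}
            @Override public void onProviderDisabled(String provider) {}
        }

    The Activity starts it with startService(new Intent(...)) once the server returns the poll period, and calls stopService() when the user ends the trip.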

  • How to copy a char * into a std::string and vice versa

    - by user295030
    If I pass a char * into a function, I want to convert it to a std::string, and once I have my result, convert it back from a std::string to a char * to show the result. I don't know how to do this conversion (I am talking about plain char *, not const char *), and I am not sure how to manipulate the value of the pointer I send in. So the steps I need are: take in a char *; convert it into a std::string; take the result of that string and put it back in the form of a char *; and return the result such that the value is available outside the function and does not get destroyed. If possible, I would like to see how it could be done via a reference versus a pointer (whose address I pass in by value, yet I can still modify the value the pointer is pointing to, so even though the copy of the pointer address in the function gets destroyed, I still see the changed value outside). Thanks!
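
    A minimal sketch of one way to do it, using a reference to a pointer (char*&); the ownership convention (the caller frees the new buffer with delete[]) and the function name are assumptions for illustration:

        #include <cstring>
        #include <iostream>
        #include <string>

        // Takes the caller's pointer by reference, so repointing it here stays
        // visible outside the function. A pointer version works the same way:
        // take char** text and assign through *text instead of text.
        void process(char*& text) {
            std::string s(text);             // char* -> std::string (makes a copy)
            s += ", world";                  // ...manipulate the string...
            // std::string -> char*: s.c_str() only lives as long as s does, so
            // the result must be copied into storage that outlives this function
            char* result = new char[s.size() + 1];
            std::strcpy(result, s.c_str());
            text = result;                   // the caller's pointer now sees the result
        }

        int main() {
            char original[] = "hello";
            char* p = original;
            process(p);
            std::cout << p << '\n';          // prints "hello, world"
            delete[] p;                      // caller owns the new buffer
            return 0;
        }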

  • Pointer to a C++ class member function as a global function's parameter?

    - by marcin1400
    I have a problem with calling a global function which takes a pointer to a function as a parameter. Here is the declaration of the global function:

        int lmdif(minpack_func_mn fcn, void *p, int m, int n,
                  double *x, double *fvec, double ftol)

    The minpack_func_mn symbol is a typedef for a pointer to a function, defined as:

        typedef int (*minpack_func_mn)(void *p, int m, int n,
                                       const double *x, double *fvec, int iflag);

    I want to call the lmdif function with a pointer to a function which is a member of a class I created. Here is the declaration of this member function:

        int LT_Calibrator::fcn(void *p, int m, int n,
                               const double *x, double *fvec, int iflag)

    I am calling the global function like this:

        info = lmdif(&LT_Calibrator::fcn, 0, m, n, x, fvec, ftol)

    Unfortunately, I get a compiler error which says:

        error C2664: 'lmdif' : cannot convert parameter 1 from
        'int (__thiscall LT_Calibrator::* )(void *,int,int,const double *,double *,int)'
        to 'minpack_func_mn'
        There is no context in which this conversion is possible

    Is there any way to solve that problem?
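
    A non-static member function has a hidden this parameter, so its type can never match minpack_func_mn. The usual fix is a static (or free) trampoline that smuggles the object through the void* user-data parameter; a sketch (fcn_thunk is a made-up name):

        typedef int (*minpack_func_mn)(void *p, int m, int n, const double *x,
                                       double *fvec, int iflag);

        // lmdif itself comes from the minpack library; declared here for the sketch
        extern int lmdif(minpack_func_mn fcn, void *p, int m, int n,
                         double *x, double *fvec, double ftol);

        class LT_Calibrator {
        public:
            int fcn(void *p, int m, int n, const double *x, double *fvec, int iflag);

            // static: no hidden 'this', so its type matches minpack_func_mn;
            // it recovers the object from the void* slot and forwards the call
            static int fcn_thunk(void *p, int m, int n, const double *x,
                                 double *fvec, int iflag) {
                return static_cast<LT_Calibrator*>(p)->fcn(p, m, n, x, fvec, iflag);
            }
        };

        // call site: pass the object through the user-data parameter
        //   LT_Calibrator cal;
        //   info = lmdif(&LT_Calibrator::fcn_thunk, &cal, m, n, x, fvec, ftol);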

  • Convert Date with characters to mm/dd/yyyy

    - by peter
    I have a column called Submit_Date in the table Tickets, and its datatype is Varchar(200). I am trying to convert it to MM/DD/YYYY format, and when I do that I get the following error:

        Msg 242, Level 16, State 3, Line 1
        The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.

    Sample data from the table:

        Submit_Date
        27-09-2013 16:15:00 CST
        30-09-2013 16:30:24 CST
        27-09-2013 10:03:46 CST
        30-09-2013 14:35:55 CST
        25-09-2013 16:28:48 CST
        24-09-2013 09:29:45 CST

    I tried the following:

        Select Convert(datetime, Submit_date, 101) from dbo.Tickets

    Let me know where I am going wrong.
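
    A sketch of one likely fix, assuming SQL Server and that every row has the same "dd-mm-yyyy hh:mm:ss CST" shape: style 101 expects mm/dd/yyyy, but this data is day-first with a trailing time zone, so strip the zone, parse with style 105 (dd-mm-yyyy), then format back out:

        -- LEFT(..., 19) drops the ' CST' suffix, which CONVERT cannot parse
        SELECT CONVERT(varchar(10),
                       CONVERT(datetime, LEFT(Submit_Date, 19), 105),
                       101) AS Submit_Date_mmddyyyy
        FROM dbo.Tickets;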

  • Best practice for managing changes to 3rd party open source libraries?

    - by Jeff Knecht
    On a recent project, I had to modify an open source library to address a functional deficiency. I followed the SVN best practice of creating a "vendor source" repository and made my changes there. I also submitted the patch to the mailing list of that project. Unfortunately, the project only has a couple of maintainers and they are very slow to commit updates. At some point, I expect the library to be updated, and I expect that my project will want to use the upgraded library. But now I have a potential problem... I don't know whether my patch will have been applied to this future release of the 3rd party library. I also don't know whether my patch will even still be compatible with the internal implementation of the upgraded components. And in all likelihood, someone else will be maintaining my project by that point. Should I name the library in a special way so it is clear that we made special modifications (e.g., commons-lang-2.x-for-my-project.jar)? Should I just document the patch and reference the SVN location and a link to the mailing list item in a README? No option that I can think of seems to be foolproof in an upgrade scenario. What is the best practice for this?

  • JPA returning null for deleted items from a set

    - by Jon
    This may be related to my question from a few days ago, but I'm not even sure how to explain this part. (It's an entirely different parent-child relationship.) In my interface, I have a set of attributes (Attribute) and valid values (ValidValue) for each one, in a one-to-many relationship. In the Spring MVC frontend, I have a page for an administrator to edit these values. Once it's submitted, if any of these fields (as <input> tags) are blank, I remove the ValidValue object like so:

        Set<ValidValue> existingValues = new HashSet<ValidValue>(attribute.getValidValues());
        Set<ValidValue> finalValues = new HashSet<ValidValue>();
        for (ValidValue validValue : attribute.getValidValues()) {
            if (!validValue.getValue().isEmpty()) {
                finalValues.add(validValue);
            }
        }
        existingValues.removeAll(finalValues);
        for (ValidValue removedValue : existingValues) {
            getApplicationDataService().removeValidValue(removedValue);
        }
        attribute.setValidValues(finalValues);
        getApplicationDataService().modifyAttribute(attribute);

    The problem is that while the database is updated appropriately, the next time I query for the Attribute objects, they're returned with an extra entry in their ValidValue set -- a null -- and thus, the next time I iterate through the values to display them, an extra blank value shows up in the middle. I've confirmed that this happens at the point of a merge or find, at "Execute query ReadObjectQuery(entity.Attribute)". Here's the code I'm using to modify the database (in the ApplicationDataService):

        public void modifyAttribute(Attribute attribute) {
            getJpaTemplate().merge(attribute);
        }

        public void removeValidValue(ValidValue removedValue) {
            ValidValue merged = getJpaTemplate().merge(removedValue);
            getJpaTemplate().remove(merged);
        }

    Here are the relevant parts of the entity classes:

        @Entity
        @Table(name = "attribute")
        public class Attribute {
            @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY, mappedBy = "attribute")
            private Set<ValidValue> validValues = new HashSet<ValidValue>(0);
        }

        @Entity
        @Table(name = "valid_value")
        public class ValidValue {
            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "attr_id", nullable = false)
            private Attribute attribute;
        }
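
    One thing worth trying (a sketch, assuming JPA 2.0 is available): let the parent own child removal via orphanRemoval, instead of merging the parent and deleting the children in separate operations, which can leave a stale placeholder in the cached collection:

        @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY,
                   mappedBy = "attribute", orphanRemoval = true)
        private Set<ValidValue> validValues = new HashSet<ValidValue>(0);

        // in the controller: mutate the collection itself, then merge the parent;
        // removed children are deleted by the provider, so no separate remove() call
        attribute.getValidValues().removeAll(existingValues);
        getApplicationDataService().modifyAttribute(attribute);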

  • Are there any NOSQL-compatible CMS projects?

    - by Michael
    This question is partially related to an older question (Any CMS is Google App Engine compatible?), but is slightly more general. It seems that in most CMS systems, the most fragile failure point is the database. Traditional database implementations scale poorly and will never be able to handle unforeseen spikes of traffic. Since Google App Engine was designed to help even small businesses overcome that problem, I had the same question that was asked earlier this year, with less than satisfactory answers. But more generally, where are the CMS projects that support NoSQL databases? Looking over Wikipedia's list of CMS platforms, I see without much effort that only traditional RDBMSs are supported by every single vendor on the list. I would have expected to see at least one or two projects handling CouchDB or similar engines. I understand the complexities of implementing a NoSQL solution to a problem that is typically solved using the relations cleanly expressed in any RDBMS, but there seems to be a rather wide market gap. Since databases are, today, easily outsourced to Google, Amazon, and others which use NoSQL models, I am amazed that there are not more projects actively pursuing this path. Am I simply not aware? Can someone please point me to projects with real momentum that are developing along this path? I'm looking for two things:
    1. a CMS that has as its backbone a NoSQL database, enabling easy database outsourcing (hosted MySQL clusters and similar solutions are not what I'm looking for)
    2. a project that is built to run on either a PaaS architecture like Google App Engine or an IaaS architecture like Amazon EC2
    Any pointers in that direction would be most welcome.

  • User height and weight in SQL

    - by Samuel
    We are planning to capture a user's height and weight, and I am looking for ideas on representing them in SQL. I have the following questions in mind:
    1. Weight can be expressed in kilograms and grams, and height in meters and centimeters. Should I capture them as a BigDecimal with an appropriate precision and scale, or capture them as vanilla strings and do the manipulation in the user interface? Note: I am planning to capture the kilograms and grams separately in the user interface.
    2. Should the unit of measurement be part of the SQL (i.e. the end user might want to view this information in pounds or inches according to his preference), or should I just support kilograms/meters in the database and do the conversion when showing it in the user interface?
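
    A common approach, sketched below; the table and column names are made up. Store one canonical metric unit per measurement in an exact DECIMAL column and convert at display time, so the schema never has to care about the user's preferred units:

        -- DECIMAL keeps exact values (no binary floating-point rounding)
        CREATE TABLE user_measurements (
            user_id    INT          NOT NULL PRIMARY KEY,
            height_cm  DECIMAL(5,1) NOT NULL,  -- e.g. 172.5 cm
            weight_kg  DECIMAL(6,3) NOT NULL   -- e.g. 72.350 kg (gram precision)
        );

        -- display-time conversion, e.g. to pounds, stays out of the schema:
        SELECT user_id, weight_kg * 2.20462 AS weight_lb
        FROM user_measurements;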

  • In Java, is there a gain in using interfaces for complex models?

    - by Gnoupi
    The title is hardly understandable, but I'm not sure how to summarize it another way; any edit to clarify is welcome. I have been told, and it has been recommended to me, to use interfaces to improve performance, even in a case which doesn't especially call for the regular "interface" role. In this case, the objects are big models (in the MVC sense), with many methods and fields. The "good use" that has been recommended to me is to create an interface with a single, unique implementation. There won't ever be any other class implementing this interface, for sure. I have been told that this is better because it "exposes less" (or something close) to the other classes which use methods from this class, as these objects refer to the object through its interface (every public method of the implementation being reproduced in the interface). This seems quite strange to me; it feels like a C++ practice (with header files). There I see the point, but in Java? Is there really a point in making an interface for such a unique implementation? I would really appreciate some clarification on the topic, so I could justify not following this kind of practice, given the hassle it creates from duplicating all declarations.
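
    For what it's worth, the recommended pattern looks like this (a sketch with made-up names). Its only real effect is visibility, not performance: callers that hold an OrderModel see nothing but the interface's methods.

        import java.math.BigDecimal;

        public interface OrderModel {            // what callers are allowed to see
            BigDecimal total();
        }

        final class OrderModelImpl implements OrderModel {
            private BigDecimal total = BigDecimal.ZERO;

            public BigDecimal total() {
                return total;
            }

            void addLine(BigDecimal amount) {    // not in the interface: hidden from
                total = total.add(amount);       // code that only holds an OrderModel
            }
        }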

  • Quick, Beginner C++ Overloading Question - Getting the compiler to perceive << is defined for a specific class

    - by Francisco P.
    Hello everyone. I edited a post of mine so I could ask this more clearly. I overloaded << for a class, Score (declared in score.h), in score.cpp:

        ostream& operator<<(ostream& os, const Score& right) {
            os << right.getPoints() << " " << right.scoreGetName();
            return os;
        }

    (getPoints fetches an int attribute, scoreGetName a string one.) I get this compile error for a test in main(), contained in main.cpp:

        binary '<<' : no operator found which takes a right-hand operand of
        type 'Score' (or there is no acceptable conversion)

    How come the compiler doesn't 'recognize' that overload as valid? (The includes are proper.) Thanks for your time.
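
    The usual cause of exactly this error (a sketch, assuming main.cpp only includes score.h): the overload's definition lives in score.cpp, but no declaration of it appears in the header, so main.cpp never sees it. Declaring it next to Score fixes that; the accessors also need to be const so they can be called on a const Score&:

        // score.h
        #include <iosfwd>    // lightweight declaration of std::ostream
        #include <string>

        class Score {
        public:
            int getPoints() const;            // const: callable through const Score&
            std::string scoreGetName() const;
            // ...
        };

        std::ostream& operator<<(std::ostream& os, const Score& right);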

  • Why is my Android app force closing when I try to check if an EditText has a double?

    - by user336861
        Scanner scanner = new Scanner(lapsPerMile_st);
        if (!scanner.hasNextDouble()) {
            Context context = getApplicationContext();
            String msg = "Please Enter Digits and Decimals Only";
            int duration = Toast.LENGTH_LONG;
            Toast.makeText(context, msg, duration).show();
            lapsPerMileEditText.setText("");
            return;
        } else {
            // edit box has only digits; set data and display stats
            data.setLapsPerMile(Integer.parseInt(lapsPerMile_st));
            lapsRunLabel.setVisibility(0);
            lapsRunTextView.setText(Integer.toString(data.getLapsRun()));
            milesRunLabel.setVisibility(0);
            milesRunTextView.setText(Double.toString(data.getLapsRun() / data.getLapsPerMile()));
        }

    And the layout:

        <EditText
            android:id="@+id/mileCount"
            android:layout_width="100dp"
            android:layout_height="wrap_content"
            android:layout_marginTop="110dp"
            android:inputType="numberDecimal"
            android:maxLength="4" />

    For some reason, if I enter a whole number such as 3 or 5 it works fine, but when I enter a floating point value such as 3.4 or 5.8 it force closes. I can't seem to figure out what's going on. Any ideas? Thanks
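
    A likely cause, judging only from the code shown: "3.4" passes the hasNextDouble() check, so control reaches the else branch, where Integer.parseInt("3.4") throws a NumberFormatException and crashes the Activity. Parsing the value as the type that was actually validated would fix it; this assumes data.setLapsPerMile can take a double:

        data.setLapsPerMile(Double.parseDouble(lapsPerMile_st));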

  • How to use a TFileStream to read 2D matrices into dynamic array?

    - by Robert Frank
    I need to read a large (2000x2000) matrix of binary data from a file into a dynamic array with Delphi 2010. I don't know the dimensions until run time. I've never read raw data like this, and don't know the IEEE formats, so I'm posting this to see if I'm on track. I plan to use a TFileStream to read one row at a time. I need to be able to read as many of these formats as possible:
    16-bit two's complement binary integer
    32-bit two's complement binary integer
    64-bit two's complement binary integer
    IEEE single-precision floating point
    For 32-bit two's complement, I'm thinking of something like the code below; changing to Int64 and Int16 should be straightforward. How can I read the IEEE format? Am I on the right track? Any suggestions on this code, or an elegant way to extend it to all 4 data types above? Since my post-processing will be the same after reading this data, I guess I'll have to copy the matrix into a common format when done. I have no problem just having four procedures (one for each data type) like the one below, but perhaps there's an elegant way to use RTTI, or buffers and then Move()s, so that the same code works for all 4 data types? Thanks!

        type
          TRowData = array of Int32;

        procedure ReadMatrix;
        var
          Matrix: array of TRowData;
          NumberOfRows: Cardinal;
          NumberOfCols: Cardinal;
          CurRow: Integer;
        begin
          NumberOfRows := 20;   // not known until run time
          NumberOfCols := 100;  // not known until run time
          SetLength(Matrix, NumberOfRows);
          for CurRow := 0 to NumberOfRows - 1 do  // dynamic arrays are 0-based
          begin
            SetLength(Matrix[CurRow], NumberOfCols);
            // pass the first element, not the array variable itself
            FileStream.ReadBuffer(Matrix[CurRow][0], NumberOfCols * SizeOf(Int32));
          end;
        end;
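
    As for the IEEE format: Delphi's Single is exactly a 32-bit IEEE single-precision float, so declaring TRowData = array of Single and reading NumberOfCols * SizeOf(Single) bytes works the same way (assuming the file is little-endian, like x86). A sketch of one generic reader that covers all four formats by reading raw bytes and letting the caller cast per element; the format enum and names are made up:

        // uses Classes (TFileStream), SysUtils (TBytes)
        type
          TSampleFormat = (sfInt16, sfInt32, sfInt64, sfSingle);

        const
          ElemSize: array[TSampleFormat] of Integer = (2, 4, 8, 4);

        // Reads one row of Cols elements into an untyped byte row; the caller
        // interprets elements, e.g. PSingle(@Row[Col * 4])^ for sfSingle or
        // PInteger(@Row[Col * 4])^ for sfInt32.
        procedure ReadRow(Stream: TFileStream; var Row: TBytes;
          Cols: Cardinal; Format: TSampleFormat);
        begin
          SetLength(Row, Cols * ElemSize[Format]);
          Stream.ReadBuffer(Row[0], Length(Row));
        end;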

  • Coordinate system problem with the grid control

    - by Jason94
    In my WPF application I'm trying to visualize some temperature data. I have a list of temperatures for the past 7 days and want to make a point-to-point line diagram. My problem is with the different coordinate systems and adjusting the data to the grid. XAML:

        <Grid Height="167" HorizontalAlignment="Left" Margin="6,6,0,0"
              Name="grid1" VerticalAlignment="Bottom" Width="455" />

    C# (draft): http://pastebin.com/6UWkMFj1

    scale is a global variable that changes with a slider (1-10). How do I correct my application so the line is always centered? As it is now, it starts out centered, but if I crank the slider up to 3-4 the line goes up above the application window. I would also like to use the full height of the grid, not just a small piece, as in the images below:

    http://img32.imageshack.us/i/002wtvu.jpg/
    http://img691.imageshack.us/i/001tqco.jpg/

    As you can see, I have worked out my data so that day 1, with a temperature of 62 F, is lower than day 2, with a temperature of 76 F, but I have scaling and placement issues... could somebody straighten out my math? :-)
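
    Since the pastebin draft isn't shown here, this is a guess at the math that usually fixes both problems: normalize each temperature against the data's own min and max, then scale by the grid's actual height, flipping because WPF's Y axis grows downward. That keeps the line inside the grid at any slider value and uses its full height:

        // maps a temperature to a Y coordinate inside the grid
        double ToY(double temp, double min, double max, double gridHeight)
        {
            double t = (temp - min) / (max - min);  // 0 = coldest, 1 = warmest
            return gridHeight * (1.0 - t);          // warmest at top, coldest at bottom
        }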

  • Javascript Anonymous Functions and Global Variables

    - by Jonathan Swift
    I thought I would try and be clever and create a Wait function of my own (I realise there are other ways to do this). So I wrote:

        var interval_id;
        var countdowntimer = 0;

        function Wait(wait_interval) {
            countdowntimer = wait_interval;
            interval_id = setInterval(function() {
                --countdowntimer <= 0 ? clearInterval(interval_id) : null;
            }, 1000);
            do {} while (countdowntimer >= 0);
        }

        // Wait a bit: 5 secs
        Wait(5);

    This all works, except for the infinite looping. Upon inspection, if I take the while loop out, the anonymous function is entered 5 times, as expected. So clearly the global variable countdowntimer is decremented. However, if I check the value of countdowntimer in the while loop, it never goes down. This is despite the fact that the anonymous function is being called whilst in the while loop! Clearly, somehow, there are two values of countdowntimer floating around, but why?
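
    The likely explanation: there aren't two copies of countdowntimer. Browser JavaScript runs on a single thread, so the setInterval callback cannot fire while the do/while loop is spinning; the decrements only happen after Wait returns. The non-blocking pattern is to hand the follow-up work to the timer instead, a minimal sketch:

        function wait(seconds, thenDo) {
            // schedule the continuation instead of blocking the thread
            setTimeout(thenDo, seconds * 1000);
        }

        // usage: everything that should happen "after the wait" goes in the callback
        wait(5, function () {
            console.log("5 seconds elapsed");
        });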

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data; it could be 16 bit, 24 bit packed, 32 bit, etc. It could be signed or unsigned, and it could be 32 or 64 bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24 bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16 bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code. What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t or int16_t for the 32 and 16 bit signed PCM respectively; I'll probably have to store the 24 bit PCM in an int32_t for 32 bit, 8 bit byte systems as well. Can anyone recommend a good way to put data into this array, and pull it out later? I'd like to avoid large sections of code which look like:

        switch (mFormat) {
        case 1: // unsigned 8 bit
            for (int i = 0; i < mChannels; i++)
                framesArray = (uint8_t*)pcm[i];
            break;
        case 2: // signed 8 bit
            for (int i = 0; i < mChannels; i++)
                framesArray = (int8_t*)pcm[i];
            break;
        case 3: // unsigned 16 bit
            ...

    Limitations: I'm working in C/C++; no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16-bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff
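
    One common way to collapse the per-format cases (a sketch; the enum and names are made up): as long as the code is only buffering, not interpreting samples, treat every sample as an opaque run of bytes, so one copy routine serves all formats and only the element size differs:

        #include <stdint.h>
        #include <string.h>

        typedef enum { PCM_U8, PCM_S16, PCM_S24, PCM_S32, PCM_F32, PCM_F64 } pcm_fmt;

        static size_t sample_size(pcm_fmt f)
        {
            switch (f) {
            case PCM_U8:  return 1;
            case PCM_S16: return 2;
            case PCM_S24: return 3;  /* packed 24-bit */
            case PCM_S32: case PCM_F32: return 4;
            case PCM_F64: return 8;
            }
            return 0;
        }

        /* copy one channel's worth of 'frames' samples, format-agnostic */
        static void buffer_channel(void *dst, const void *src,
                                   size_t frames, pcm_fmt f)
        {
            memcpy(dst, src, frames * sample_size(f));
        }

    Casting back to a typed pointer is then only needed at the edges where samples are actually read or written, not in the buffering core. (On a DSP with 16-bit bytes the sizes would count 16-bit chars instead, so the table would need adjusting.)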

  • Send and receive data through the power network

    - by luvieere
    I'm not interested in a hardware solution; I want to know about software that could "read" a modulated signal received through the power supply: some sort of low-level driver that would access the power signal in a convenient place and demodulate it. Is there a way to receive a signal from the computer's power supply? I'm interested in an API or library that would allow the computer to be seen as a node in a Power Line Communication network and receive data directly through the power cable, without the need for a converter. Is there any active research in this field? Edit: There is software that reads, monitors, and displays internal component voltages (the DC voltage after being converted and filtered by the power supply), so what I need is a method of data encoding that would be invariant to conversion and filtering, the original signal embedded in the AC being present in some form within the converted DC signal.

  • pointer, malloc and char in C

    - by user2534078
    I'm trying to copy a const char array to some place in memory and point to it. Let's say I'm defining this variable in the main program:

        char *p = NULL;

    and sending it to a function with a string:

        myFunc(&p, "Hello");

    Now I want the pointer, at the end of this function, to point to the letter H, so that if I puts() it, it prints Hello. Here is what I tried to do:

        void myFunc(char** ptr, const char strng[])
        {
            *ptr = (char *)malloc(sizeof(strng));
            char *tmp = *ptr;
            int i = 0;
            while (1) {
                *ptr[i] = strng[i];
                if (strng[i] == '\0')
                    break;
                i++;
            }
            *ptr = tmp;
        }

    I know it's rubbish right now, but I would like to understand how to do it right. My idea was to allocate the needed memory, copy a char, move forward with the pointer, and so on. I also tried making the ptr argument by-reference (like &ptr), but with no success, due to a problem with lvalues and rvalues. The only thing changeable for me is the function, and I would like not to use strings but chars, as this is an exercise. Thanks for any help in advance.
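
    A corrected sketch, with the two bugs that matter called out in comments: sizeof(strng) measures a pointer (the array parameter decays), not the string's length, and *ptr[i] parses as *(ptr[i]) rather than the intended (*ptr)[i]:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        void myFunc(char **ptr, const char *strng)
        {
            size_t i = 0;
            /* strlen + 1: room for the characters plus the terminating '\0';
             * sizeof(strng) would only give the size of the pointer itself */
            *ptr = malloc(strlen(strng) + 1);
            if (*ptr == NULL)
                return;
            do {
                (*ptr)[i] = strng[i];       /* parentheses: element i of *ptr */
            } while (strng[i++] != '\0');   /* copies the '\0', then stops */
        }

        int main(void)
        {
            char *p = NULL;
            myFunc(&p, "Hello");
            puts(p);        /* prints Hello; *p is 'H' */
            free(p);
            return 0;
        }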

  • How to get drag working properly in Silverlight when the mouse is not pressed?

    - by Mrt
    Hello, I have the following code. XAML:

        <Canvas x:Name="LayoutRoot">
            <Rectangle Canvas.Left="40" Canvas.Top="40" Width="20" Height="20"
                       Name="rec" Fill="Red"
                       MouseLeftButtonDown="rec_MouseLeftButtonDown"
                       MouseMove="rec_MouseMove" />
        </Canvas>

    Code-behind:

        public partial class MainPage : UserControl
        {
            public MainPage()
            {
                InitializeComponent();
            }

            public Point LastDragPosition { get; set; }
            private bool isDragging;

            private void rec_MouseMove(object sender, MouseEventArgs e)
            {
                if (!isDragging)
                {
                    return;
                }
                var position = e.GetPosition(rec as UIElement);
                var newPosition = new Point(
                    Canvas.GetLeft(rec) + position.X - LastDragPosition.X,
                    Canvas.GetTop(rec) + position.Y - LastDragPosition.Y);
                Canvas.SetLeft(rec, newPosition.X);
                Canvas.SetTop(rec, newPosition.Y);
                LastDragPosition = e.GetPosition(rec as UIElement);
            }

            private void rec_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
            {
                isDragging = true;
                LastDragPosition = e.GetPosition(sender as UIElement);
                rec.CaptureMouse();
            }
        }

    The issue is that the rectangle follows the mouse while the left button is down, but I would like the rectangle to move even when the left button isn't down. It works if you move the mouse very slowly, but if you move the mouse too quickly, the rectangle stops moving (is the mouse capture lost?). Cheers,
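
    A likely culprit (a sketch, not a verified fix): the deltas are measured relative to rec, the very element being moved, so a fast mouse can leave the rectangle between MouseMove events and the reference frame jumps. Measuring against the stationary LayoutRoot canvas makes the math independent of the rectangle's own motion:

        private void rec_MouseMove(object sender, MouseEventArgs e)
        {
            if (!isDragging) return;

            Point position = e.GetPosition(LayoutRoot);  // fixed reference frame
            Canvas.SetLeft(rec, Canvas.GetLeft(rec) + position.X - LastDragPosition.X);
            Canvas.SetTop(rec, Canvas.GetTop(rec) + position.Y - LastDragPosition.Y);
            LastDragPosition = position;                 // same frame as above
        }

        // rec_MouseLeftButtonDown should record e.GetPosition(LayoutRoot) too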

  • git submodule svn external

    - by Jason
    Let's say I have 3 git repositories, each with a lib and tests folder in the root. All 3 repositories are part of what I want to be a single package; however, it is important to me to keep the repositories separate. I am new to git, coming from SVN, so I have been reading up on submodules and how they differ from svn:externals. In SVN I could have a single lib/vendor/package directory, and inside package I could set up 3 externals pointing to each of my 3 repositories' lib directory, renaming it appropriately, like:

        lib/vendor/package/a -> repo1/lib
        lib/vendor/package/b -> repo2/lib
        lib/vendor/package/c -> repo3/lib

    but from my understanding this is not possible with git. Am I missing something? Really, I'm hoping this can be solved in one of two ways:
    1. Someone will point out how to create a 4th git repository which has the other 3 as submodules, organized as I have mentioned above (where I can have an a, b, and c folder inside the root).
    2. Someone will point out how to set this up using svn:externals in combination with GitHub's SVN support, referencing the lib directory within each git repository (from my understanding this is impossible).
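
    For option 1, a sketch of the superproject setup (the URLs are placeholders). One caveat: a submodule always tracks a whole repository, so each path below will contain that repo's full tree (its lib and tests), not just its lib subdirectory; git has no direct equivalent of mounting a subdirectory the way svn:externals can:

        git init package-super
        cd package-super
        git submodule add git://example.com/repo1.git lib/vendor/package/a
        git submodule add git://example.com/repo2.git lib/vendor/package/b
        git submodule add git://example.com/repo3.git lib/vendor/package/c
        git commit -m "Add the three library repositories as submodules"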

  • When using Direct3D, how much math is being done on the CPU?

    - by zirgen
    Context: I'm just starting out. I'm not even touching the Direct3D 11 API, and instead looking at understanding the pipeline, etc. From looking at documentation and information floating around the web, it seems like some calculations are being handled by the application. That is, instead of simply presenting matrices to multiply to the GPU, the calculations are being done by a math library that operates on the CPU. I don't have any particular resources to point to, although I guess I can point to the XNA Math Library or the samples shipped in the February DX SDK. When you see code like mViewProj = mView * mProj;, that projection is being calculated on the CPU. Or am I wrong? If you were writing a program where you can have 10 cubes on the screen, and you can move or rotate the cubes as well as the viewpoint, what calculations would you do on the CPU? I think I would store the geometry for a single cube, and then use transform matrices to represent the actual instances. And then it seems I would use the XNA math library, or another of my choosing, to transform each cube in model space, then get the coordinates in world space, then push the information to the GPU. That's quite a bit of calculation on the CPU. Am I wrong? Am I reaching conclusions based on too little information and understanding? What terms should I Google for, if the answer is STFW? Or if I am right, why aren't these calculations being pushed to the GPU as well?
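
    That understanding is basically right, and a sketch of the usual division of labor looks like this (XNA Math as mentioned above; the function and variable names in UpdateCubes are illustrative). Composing one world-view-projection matrix per object per frame is cheap on the CPU; the expensive part, applying it to every vertex, is what the GPU does:

        #include <xnamath.h>  // XNA Math, shipped with the DirectX SDK

        void UpdateCubes(float angles[10], XMFLOAT3 positions[10])
        {
            XMMATRIX view = XMMatrixLookAtLH(
                XMVectorSet(0, 3, -8, 1),   // eye
                XMVectorSet(0, 0, 0, 1),    // look-at target
                XMVectorSet(0, 1, 0, 0));   // up
            XMMATRIX proj = XMMatrixPerspectiveFovLH(
                XM_PIDIV4, 16.0f / 9.0f, 0.1f, 100.0f);

            for (int i = 0; i < 10; ++i) {
                XMMATRIX world = XMMatrixRotationY(angles[i]) *
                    XMMatrixTranslation(positions[i].x, positions[i].y, positions[i].z);
                XMMATRIX wvp = world * view * proj;  // done on the CPU, once per cube
                // upload wvp to the cube's constant buffer; the vertex shader then
                // transforms every one of the cube's vertices by it on the GPU
            }
        }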

  • How does casting work in C#/.NET 3.5 for nullable ('?') types?

    - by Inez
    This is my code, which works:

        public decimal? Get() {
            var res = ...
            return res.Count() > 0 ? res.First() : (decimal?) null;
        }

    and this one doesn't work:

        public decimal? Get() {
            var res = ...
            return res.Count() > 0 ? res.First() : null;
        }

    giving the compiler error:

        Error 1  Type of conditional expression cannot be determined because
        there is no implicit conversion between 'decimal' and '<null>'

    I wonder why? Any ideas?
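
    The conditional operator picks its result type from the two branches alone; the method's decimal? return type doesn't participate. decimal and the untyped null literal share no common type, hence the error. Casting either branch to decimal? gives the operator a type both branches can convert to:

        return res.Count() > 0 ? res.First() : (decimal?) null;   // works
        return res.Count() > 0 ? (decimal?) res.First() : null;   // also works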

  • Fastest way to convert a file from latin1 to utf-8 in Python

    - by xsaero00
    I need the fastest way to convert files from latin1 to utf-8 in Python. The files are large, ~2 GB (I am moving DB data). So far I have:

        import codecs

        infile = codecs.open(tmpfile, 'r', encoding='latin1')
        outfile = codecs.open(tmpfile1, 'w', encoding='utf-8')
        for line in infile:
            outfile.write(line)
        infile.close()
        outfile.close()

    but it is still slow. The conversion takes one fourth of the whole migration time. I could also use a Linux command-line utility if it is faster than native Python code.
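
    Two directions worth trying. Decoding in large binary chunks removes the per-line overhead; since latin-1 is a single-byte encoding, a chunk boundary can never split a character. (The file names reuse the ones above.) On the command-line side, iconv does the same job in C: iconv -f LATIN1 -t UTF-8 infile > outfile.

        BLOCK = 16 * 1024 * 1024  # 16 MB per read; tune to taste

        src = open(tmpfile, 'rb')
        dst = open(tmpfile1, 'wb')
        while True:
            chunk = src.read(BLOCK)
            if not chunk:
                break
            # latin-1 is one byte per character, so any split point is safe
            dst.write(chunk.decode('latin1').encode('utf-8'))
        src.close()
        dst.close()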

  • How to limit TCP writes to a particular size and then block until the data is read

    - by ustulation
    {Qt 4.7.0, VS 2010} I have a server written in Qt and a 3rd-party client executable. The Qt-based server uses the QTcpServer and QTcpSocket facilities (non-blocking). Going through articles on TCP, I understand the following: the original TCP specification made the negotiable window size a 16-bit value, for a maximum of 65535 bytes, but implementations often use the RFC window-scale extension, which allows the sliding window to be scaled by bit-shifting, up to a maximum of 1 gigabyte. This is implementation-defined, and it could result in very different window sizes on the receiver and sender ends; the server uses Qt's facilities without hardcoding any window size limit. The client first asks for all the information it can, based on the previous messages from the server, before handling the new (accumulating) incoming messages. So at some point the server receives a lot of messages, each asking for several MB of data. The server processes these and puts the results into its send buffer. The client, however, is unable to handle the messages at the same pace, and it seems that the client's receive buffer is far smaller (65535 bytes, maybe) than the sender's transmit window. Messages thus accumulate at the sender's end; normally this would continue until the sender's buffer is full too, after which TCP writes on the sender would block. That does not happen here, as the sender's buffer is much larger, so instead it manifests as growing memory consumption on the sender's end. To prevent this, I used the Qt socket's waitForBytesWritten() with the timeout set to -1, for an infinite waiting period. From the observed behaviour, this blocks the writing thread until the data has actually been accepted by the receiver's window (which happens once earlier messages have been processed by the client at the application level). This has made memory consumption at the server end almost negligible. Is there a better alternative to this (in Qt) if I want to restrict the memory consumption at the server end to, say, x MB? Also, please point out if any of my understanding is incorrect.
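
    One alternative sketch (the 8 MB cap and the function name are made up): instead of blocking, check QTcpSocket::bytesToWrite(), which reports how much written data Qt still has queued for the socket, and apply back-pressure at the application level, resuming from the bytesWritten() signal:

        #include <QtNetwork/QTcpSocket>

        static const qint64 kMaxQueued = 8 * 1024 * 1024;  // cap, e.g. x = 8 MB

        // Returns true if the payload was handed to the socket; false means the
        // caller should pause producing and retry from the bytesWritten() signal.
        bool trySend(QTcpSocket *socket, const QByteArray &payload)
        {
            if (socket->bytesToWrite() > kMaxQueued)
                return false;          // back-pressure instead of buffering more
            socket->write(payload);
            return true;
        }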

  • How to use the boost lexical_cast library just for checking input

    - by Inverse
    I use the boost lexical_cast library for parsing text data into numeric values quite often. In several situations, however, I only need to check whether values are numeric; I don't actually need or use the conversion. So I was thinking about writing a simple function to test if a string is a double:

        template<typename T>
        bool is_double(const T& s) {
            try {
                boost::lexical_cast<double>(s);
                return true;
            } catch (...) {
                return false;
            }
        }

    My question is: are there any optimizing compilers that would drop the lexical_cast here, since I never actually use the value? Is there a better technique for using the lexical_cast library to perform input checking?
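
    On the first question: no conforming compiler may drop the call, because lexical_cast can throw, and whether it throws is exactly what this function observes; the unused return value doesn't make the call dead. A slightly tighter sketch catches the specific exception instead of everything, so unrelated errors still propagate:

        #include <boost/lexical_cast.hpp>
        #include <string>

        bool is_double(const std::string& s) {
            try {
                boost::lexical_cast<double>(s);  // kept: its exception is observable
                return true;
            } catch (const boost::bad_lexical_cast&) {
                return false;
            }
        }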

  • How to draw shadows that don't suck?

    - by mystify
    A CAShapeLayer uses a CGPathRef to draw its stuff. So I have a star path, and I want a smooth drop shadow with a radius of about 15 units. Probably there is some nice functionality in newer iPhone OS versions, but I need to do it myself for the aged 3.0 (which most people still use). I tried to do some REALLY nasty stuff: I created a for-loop and sequentially created about 15 of those paths, transform-scaling them step by step to become bigger, then assigning each to a newly created CAShapeLayer and decreasing its alpha a little bit on every iteration. Not only is this scaling mathematically incorrect (it should happen relative to the outline!), but the shadow is not rounded and looks really ugly. That's why nice soft shadows have a radius. The tips of a star shouldn't appear totally sharp after a shadow size of 15 units; they should be soft like cream. But in my ugly solution they're just as sharp as the star itself, since all I do is scale the star 15 times and decrease its alpha 15 times. Ugly. I wonder how the big guys do it? If you had an arbitrary path, and that path had to throw a shadow, how would the algorithm for that work? Probably the path would have to be expanded like 30 times, point by point, relative to the tangent of the outline, away from the filled part, and just by 0.5 units each time, to get a nice blending. Before I reinvent the wheel, maybe someone has a handy example or link?
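
    One approach that already works on iPhone OS 3.0 (a sketch, assuming the star is drawn in a UIView's drawRect: and starPath is the existing CGPathRef held as an ivar): Core Graphics applies a proper blur to anything drawn while a shadow is set, so there is no need to expand the path by hand:

        - (void)drawRect:(CGRect)rect {
            CGContextRef ctx = UIGraphicsGetCurrentContext();
            CGContextSaveGState(ctx);
            // no offset, 15-unit blur: a soft halo all around the star's outline
            CGContextSetShadowWithColor(ctx, CGSizeZero, 15.0f,
                                        [[UIColor blackColor] CGColor]);
            CGContextAddPath(ctx, starPath);
            CGContextSetFillColorWithColor(ctx, [[UIColor yellowColor] CGColor]);
            CGContextFillPath(ctx);  // the fill is drawn with the blurred shadow
            CGContextRestoreGState(ctx);
        }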
