Search Results

Search found 5267 results on 211 pages for 'use cases'.


  • Raytraced Shadows Problem

    - by Mat
    Hey there! I've got a problem with shadow rays in my raytracer. Please have a look at the following two pictures: 3ds Max vs. my raytracer. The scene is lit by a very bright light shining from the back. It's so bright that there is no gradient in the shading, just either white or dark (due to the overexposure). Both images were rendered using 3DStudioMax and both use the exact same geometry; just in one case the normals are interpolated across the triangles.

    Now consider the red dot on the surface. In the unsmoothed version it lies in a dark area. This means that the light source is not visible from this triangle, since the triangle is facing away from it. In the smoothed version, however, it lies in the lit area, because the interpolated normal suggests that the light would be visible at that point (although the actual geometry of the triangle is facing away from the light source).

    My problem appears when raytraced shadows come in. If a shadow ray is shot into the scene from the red dot to test whether the light source is visible (to determine shadowing), the shadow ray will return an intersection regardless of whether normals are interpolated or not (because intersections depend only on the geometry). Therefore the pixel would be shaded dark. 3ds Max handles the case correctly: its image was generated with raytraced shadows turned on. However, my own raytracer runs exactly into this problem when I turn on raytraced shadows (in my raytracer the point is dark in both cases, because the raytraced shadows determine that the point lies in shadow), and I don't know how to solve it. I hope someone knows this problem and how to deal with it. Thanks!

    Read the article

  • How do I get 3 lines of text from a paragraph

    - by Keltex
    I'm trying to create a snippet from a paragraph. I have a long paragraph of text with a word highlighted in the middle. I want to get the line containing the word, plus the line before and the line after it. I have the following pieces of information:

    - The text (in a string)
    - The lines are delimited by a NEWLINE character \n
    - I have the index into the string of the word I want to highlight

    A couple of other criteria:

    - If my word falls on the first line of the paragraph, it should show the first 3 lines
    - If my word falls on the last line of the paragraph, it should show the last 3 lines
    - It should show the entire paragraph in the degenerate cases (the paragraph only has 1 or 2 lines)

    Here's an example:

        This is the 1st line of CAT text in the paragraph
        This is the 2nd line of BIRD text in the paragraph
        This is the 3rd line of MOUSE text in the paragraph
        This is the 4th line of DOG text in the paragraph
        This is the 5th line of RABBIT text in the paragraph

    If my index points to BIRD, it should show lines 1, 2 & 3 as one complete string:

        This is the 1st line of CAT text in the paragraph
        This is the 2nd line of BIRD text in the paragraph
        This is the 3rd line of MOUSE text in the paragraph

    If my index points to DOG, it should show lines 3, 4 & 5 as one complete string:

        This is the 3rd line of MOUSE text in the paragraph
        This is the 4th line of DOG text in the paragraph
        This is the 5th line of RABBIT text in the paragraph

    etc. Anybody want to help tackle this?
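    For what it's worth, a minimal Python sketch of one way to do this, assuming the index is a character offset into the string; the names are illustrative only:

        def snippet_around(text, index):
            """Return the line containing `index` plus its neighbours (3 lines total)."""
            lines = text.split('\n')
            if len(lines) <= 3:
                return text                      # degenerate cases: 1-3 lines
            # find which line the character offset falls on
            offset, line_no = 0, 0
            for i, line in enumerate(lines):
                if offset + len(line) >= index:
                    line_no = i
                    break
                offset += len(line) + 1          # +1 for the newline
            # clamp the window so the first/last lines still yield 3 lines
            start = min(max(line_no - 1, 0), len(lines) - 3)
            return '\n'.join(lines[start:start + 3])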

    Read the article

  • RESTful idempotence

    - by DutrowLLC
    I'm designing a RESTful web service utilizing ROA (resource-oriented architecture). I'm trying to work out an efficient way to guarantee idempotence for PUT requests that create new resources in cases where the server designates the resource key.

    From my understanding, the traditional approach is to create a type of transaction resource such as /CREATE_PERSON. The client-server interaction for creating a new person resource would then be in two parts:

    Step 1: Get a unique transaction id for creating the new PERSON resource:

        Client request:
        GET /CREATE_PERSON

        Server response:
        200 OK
        transaction-id: "as8yfasiob"

    Step 2: Create the new person resource in a request guaranteed to be unique by using the transaction id:

        Client request:
        PUT /CREATE_PERSON/{transaction_id}
        first_name="Big bubba"

        Server response:
        201 Created
        PersonKey="398u4nsdf"

        (If the request is a duplicate, the server would send this same response without
        creating a new resource. It would perhaps send an error response if the transaction
        id was used on a non-duplicate request, but I have control over the client, so I can
        guarantee that this won't happen.)

    The problem that I see with this approach is that it requires sending two requests to the server in order to do the single operation of creating a new PERSON resource. This creates a performance issue, increasing the chance that the user will be waiting around for the client to complete their request. I've been trying to hash out ideas for eliminating the first step, such as pre-sending transaction ids with each request, but most of my ideas have other issues or involve sacrificing the statelessness of the application. Is there a way to do this?
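    For illustration, a minimal client-side sketch of the two-step flow above using the Python requests library; the host name, header name and response fields are assumptions carried over from the example, not a known API:

        import requests

        BASE = "https://api.example.com"          # hypothetical host

        def create_person_idempotently(first_name):
            # Step 1: ask the server for a one-time transaction id
            tx = requests.get(BASE + "/CREATE_PERSON").headers["transaction-id"]

            # Step 2: PUT against that id; repeating this call after a timeout or
            # network error is safe because the server deduplicates on the id
            resp = requests.put(
                BASE + "/CREATE_PERSON/" + tx,
                data={"first_name": first_name},
            )
            resp.raise_for_status()
            return resp.json()["PersonKey"]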

    Read the article

  • Take screenshot with Selenium: WaitForPageToLoad does not wait long enough

    - by OregonGhost
    I'm trying to get screenshots from a web page with multiple browsers. Just experimenting with Selenium RC, I wrote code like this:

        var sel = new DefaultSelenium(server, 4444, target, url);
        sel.Start();
        sel.Open(url);
        sel.WaitForPageToLoad("30000");
        var imageString = sel.CaptureScreenshotToString();

    This basically works, but in most cases the screenshot is of a blank browser window, because the page is not yet ready for display. It kind of works if I add a sleep just after the WaitForPageToLoad, but that slows down the fast browsers and/or may be too short for the slower browsers (or under load). A typical solution for this seems to be to wait for the presence of a certain element. However, this is meant as a simple generic solution to get a screenshot of a local web page with as many browsers as possible (to test the layout), and I don't want to have to enter certain element names or whatever. It's a simple tool where you just enter the Selenium Server URL and the URL you want to test, and get the screenshots back. Any advice?
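    Not Selenium-specific, but a minimal Python sketch of the usual alternative to a fixed sleep: poll an arbitrary readiness check with a timeout, so fast browsers continue immediately and slow ones get the full budget. The get_eval call in the trailing comment is only an illustration and assumes a Selenium RC client object named sel:

        import time

        def wait_until(ready, timeout=30.0, interval=0.25):
            """Poll ready() until it returns True or `timeout` seconds elapse."""
            deadline = time.time() + timeout
            while time.time() < deadline:
                if ready():
                    return True
                time.sleep(interval)
            return False

        # e.g. wait_until(lambda: sel.get_eval("window.document.readyState") == "complete")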

    Read the article

  • Regex: Use start of line/end of line signs (^ or $) in different context

    - by fgysin
    While doing some small regex task I came upon this problem. I have a string that is a list of tags that looks e.g. like this:

        foo,bar,qux,garp,wobble,thud

    What I needed to do was to check if a certain tag, e.g. 'garp', was in this list. (What it finally matches is not really important, just whether there is a match or not.) My first and a bit stupid try at this was to use the following regex:

        [^,]garp[,$]

    My idea was that before 'garp' there should either be the start of the line/string or a comma, and after 'garp' there should be either a comma or the end of the line/string. Now, it is instantly obvious that this regex is wrong: both ^ and $ change their behaviour in the context of the character class [ ]. What I finally came up with is the following:

        ^garp$|^garp,|,garp,|,garp$

    This regex just handles the 4 cases one by one (tag at the beginning of the list, in the middle, at the end, or as the only element of the list). The last regex is somehow a bit ugly in my eyes, and just for fun's sake I'd like to make it a bit more elegant. Is there a way the start of line/end of line characters (^ and $) can be used in the context of character classes?
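    For comparison, a small Python sketch of the two usual alternatives: put the anchors and the comma into a non-capturing group instead of a character class, or avoid the regex entirely by splitting on the delimiter:

        import re

        tags = "foo,bar,qux,garp,wobble,thud"

        # anchors and the comma alternated inside a group, not a character class
        print(bool(re.search(r"(?:^|,)garp(?:,|$)", tags)))   # True

        # or skip the regex and split on the delimiter
        print("garp" in tags.split(","))                      # True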

    Read the article

  • Check if files in a directory are still being written in Windows Batch File

    - by FMFF
    Hello. Here's my batch file to parse a directory and zip files of a certain type:

        REM Begin ------------------------
        tasklist /FI "IMAGENAME eq 7za.exe" /FO CSV > search.log
        FOR /F %%A IN (search.log) DO IF %%~zA EQU 0 GOTO end
        for /f "delims=" %%A in ('dir C:\Temp\*.ps /b') do (
            "C:\Program Files\7-Zip\cmdline\7za.exe" a -tzip -mx9 "C:\temp\Zip\%%A.zip" "C:\temp\%%A"
            Move "C:\temp\%%A" "C:\Temp\Archive"
        )
        :end
        del search.log
        REM pause
        exit
        REM End ---------------------------

    This code works just fine for 90% of my needs. It will be deployed as a scheduled task. However, the *.ps files are rather large (minimum of 1GB) in real cases. So the code is supposed to check whether the incoming file is completely written and is not locked by the application that is writing it. I saw another example elsewhere that suggested the following approach:

        :TestFile
        ren c:\file.txt c:\file.txt
        if errorlevel 0 goto docopy
        sleep 5
        goto TestFile
        :docopy

    However, this example is good for a fixed file. How can I use that many labels and GOTOs inside a for loop without causing an infinite loop? Or is this code safe to be used in the for loop? Thank you for any help.

    Read the article

  • How do I manage application configuration in ASP.NET?

    - by GlennS
    I am having difficulty with managing configuration of an ASP.NET application to deploy for different clients. The sheer volume of different settings which need twiddling takes up large amounts of time, and the current configuration methods are too complicated to enable us to push this responsibility out to support partners. Any suggestions for better methods to handle this, or good sources of information to research?

    How we do things at present:

    - Various XML configuration files which are referenced in Web.Config, for example an AppSettings.xml
    - Configurations for specific sites are kept in duplicate configuration files
    - Text files containing lists of data specific to the site
    - In some cases, manual one-off changes to the database
    - C# configuration for Windsor IoC

    The specific issues we are having:

    - Different sites with different features enabled, different external services we have to talk to and different business rules
    - Different deployment types (live, test, training)
    - Configuration keys change across versions (get added, removed), meaning we have to update all the duplicate files
    - We still need to be able to alter keys while the application is running

    Our current thoughts on how we might approach this are:

    - Move the configuration into dynamically compiled code (possibly Boo, Binsor or JavaScript)
    - Have some form of diffing/merging configuration: combine a default config with a live/test/training config and a site-specific config

    Read the article

  • Automatic Adjusting Range Table

    - by Bradford
    I have a table with a start date range, an end date range, and a few other additional columns. On input of a new record, I want to automatically adjust any overlapping date ranges (shrinking them to allow for the new input). I also want to ensure that no overlapping records can accidentally be inserted into this table. I'm using Oracle and Java for my application code.

    How should I enforce the prevention of overlapping date ranges and also allow for automatically adjusting overlapping ranges? Should I create an AFTER INSERT trigger, with a dbms_lock to serialize access, to prevent the overlapping data, and then apply the logic in Java to auto-adjust everything? Or should that part be in a PL/SQL stored procedure call? This is something that we need for a couple of other tables, so it'd be nice to abstract. If anyone has something like this already written, please share :) I did find this reference: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:474221407101

    Here's an example of how each of the 4 overlapping cases should be handled for adjustment on insert:

        = Example 1 =
        In DB (Start, End, Value):
        (0, 10, 'X')   **(30, 100, 'Z')   (200, 500, 'Y')
        Input (20, 50, 'A') Gives
        (0, 10, 'X')   **(20, 50, 'A')   **(51, 100, 'Z')   (200, 500, 'Y')

        = Example 2 =
        In DB (Start, End, Value):
        (0, 10, 'X')   **(30, 100, 'Z')   (200, 500, 'Y')
        Input (40, 80, 'A') Gives
        (0, 10, 'X')   **(30, 39, 'Z')   **(40, 80, 'A')   **(81, 100, 'Z')   (200, 500, 'Y')

        = Example 3 =
        In DB (Start, End, Value):
        (0, 10, 'X')   **(30, 100, 'Z')   (200, 500, 'Y')
        Input (50, 120, 'A') Gives
        (0, 10, 'X')   **(30, 49, 'Z')   **(50, 120, 'A')   (200, 500, 'Y')

        = Example 4 =
        In DB (Start, End, Value):
        (0, 10, 'X')   **(30, 100, 'Z')   (200, 500, 'Y')
        Input (20, 120, 'A') Gives
        (0, 10, 'X')   **(20, 120, 'A')   (200, 500, 'Y')

    The algorithm is as follows, with given range = g, input range = i, output range set = o:

        if i.start <= g.start
            if i.end >= g.end
                o_1 = i
            else
                o_1 = i
                o_2 = (i.end + 1, g.end)
        else
            if i.end >= g.end
                o_1 = (g.start, i.start - 1)
                o_2 = i
            else
                o_1 = (g.start, i.start - 1)
                o_2 = i
                o_3 = (i.end + 1, g.end)
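    A minimal Python sketch of the adjustment logic itself, run against an in-memory list of (start, end, value) tuples; it reproduces the four examples above, while the trigger/locking question is left open:

        def insert_range(ranges, new):
            """Insert (start, end, value) into a list of ranges, shrinking overlaps."""
            n_start, n_end, _ = new
            out = []
            for g_start, g_end, g_val in ranges:
                if g_end < n_start or g_start > n_end:    # no overlap: keep as-is
                    out.append((g_start, g_end, g_val))
                    continue
                if g_start < n_start:                     # keep the left remainder
                    out.append((g_start, n_start - 1, g_val))
                if g_end > n_end:                         # keep the right remainder
                    out.append((n_end + 1, g_end, g_val))
            out.append(new)
            return sorted(out)

        ranges = [(0, 10, 'X'), (30, 100, 'Z'), (200, 500, 'Y')]
        print(insert_range(ranges, (40, 80, 'A')))
        # [(0, 10, 'X'), (30, 39, 'Z'), (40, 80, 'A'), (81, 100, 'Z'), (200, 500, 'Y')]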

    Read the article

  • No Commons Logging in Android?

    - by Joe Boese
    Hello all, I have a pretty big library I developed specifically for use in my Android application. However, the business logic itself has no dependency on Android. To preserve that, I used Commons Logging throughout this library and its respective JUnit tests (which I run in Eclipse). However, now that I am starting to integrate it into an Activity which I launch on Android, I am unable to get my logging to work.

    In Eclipse/JUnit, I had simply pulled in log4j's jar file as well as a log4j.properties file. This doesn't seem to work when deploying to a device. After struggling with attempting to get that to work for several hours, I gave up and tried replacing all my Commons Logging stuff with android.util.Log. Now I can log on the device, but all the JUnit tests are broken: when any JUnit test tries to log using android.util.Log, it throws a RuntimeException 'Stub!'.

    I would prefer to revert to my Commons Logging approach if anyone can help with that; otherwise, what can I do to get my JUnit test cases running using android.util.Log? Many thanks in advance. I've spent more than a few hours on this and I'd like to move on to writing real code again! Joe

    Read the article

  • How does 'lazy' work?

    - by Matt Fenwick
    What is the difference between these two functions? I see that lazy is intended to be lazy, but I don't understand how that is accomplished.

        -- | Identity function.
        id :: a -> a
        id x = x

        -- | The call '(lazy e)' means the same as 'e', but 'lazy' has a
        -- magical strictness property: it is lazy in its first argument,
        -- even though its semantics is strict.
        lazy :: a -> a
        lazy x = x
        -- Implementation note: its strictness and unfolding are over-ridden
        -- by the definition in MkId.lhs; in both cases to nothing at all.
        -- That way, 'lazy' does not get inlined, and the strictness analyser
        -- sees it as lazy.  Then the worker/wrapper phase inlines it.
        -- Result: happiness

    Tracking down the note in MkId.lhs (hopefully this is the right note and version, sorry if it's not):

        Note [lazyId magic]
        ~~~~~~~~~~~~~~~~~~~
        lazy :: forall a?. a? -> a?   (i.e. works for unboxed types too)

        Used to lazify pseq:  pseq a b = a `seq` lazy b

        Also, no strictness: by being a built-in Id, all the info about lazyId
        comes from here, not from GHC.Base.hi. This is important, because the
        strictness analyser will spot it as strict!

        Also no unfolding in lazyId: it gets "inlined" by a HACK in CorePrep.
        It's very important to do this inlining after unfoldings are exposed in
        the interface file. Otherwise, the unfolding for (say) pseq in the
        interface file will not mention 'lazy', so if we inline 'pseq' we'll
        totally miss the very thing that 'lazy' was there for in the first
        place. See Trac #3259 for a real world example.

        lazyId is defined in GHC.Base, so we don't have to inline it. If it
        appears un-applied, we'll end up just calling it.

    I don't understand that, because it refers to lazyId instead of lazy. How does lazy work?

    Read the article

  • Keyword to SQL search

    - by jdelator
    Use case: When a user goes to my website, they will be confronted with a search box much like SO. They can search for results using plain text: ".net questions", "closed questions", ".net and java", etc. The search will function a bit differently than SO, in that it will try to use as much of the schema of the database as possible rather than doing a straight fulltext search. So ".net questions" will only search for .net questions as opposed to .net answers (probably not applicable to the SO case, just an example here), "closed questions" will return questions that are closed, and ".net and java" will return questions that relate to .net and java and nothing else.

    Problem: I'm not too familiar with the terminology, but I basically want to do a keyword-to-SQL driven search. I know the schema of the database and I can also data-mine the database. I want to know about any existing approaches before I try to implement this. I guess this question is about what a good design for the stated problem looks like.

    Proposed: My proposed solution so far looks something like this:

    1. Clean the input (just remove any special characters).
    2. Parse the input into chunks of data: break an input of "c# java" into c# and java. Also handle the special cases, like "'c# java' questions" becoming 'c# java' and "questions".
    3. Build a tree out of the input.
    4. Bind the data to metadata, so convert stuff like "closed questions" and relate it to the isclosed column of a table.
    5. Convert the tree into a SQL query.

    Thoughts/suggestions/links?
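    As an illustration of steps 4 and 5, a rough Python sketch that maps recognised keywords onto schema columns and treats everything else as a tag filter; the table and column names are invented for the example:

        # hypothetical schema: questions(id, title, isclosed), question_tags(question_id, tag)
        KEYWORD_COLUMNS = {
            "closed": ("isclosed = ?", 1),
            "open":   ("isclosed = ?", 0),
        }

        def keywords_to_sql(query):
            clauses, params = [], []
            for token in query.lower().split():
                if token in KEYWORD_COLUMNS:
                    clause, value = KEYWORD_COLUMNS[token]
                    clauses.append(clause)
                    params.append(value)
                elif token != "questions":                 # filler word, ignore
                    clauses.append("EXISTS (SELECT 1 FROM question_tags t "
                                   "WHERE t.question_id = q.id AND t.tag = ?)")
                    params.append(token)
            where = " AND ".join(clauses) or "1=1"
            return "SELECT q.* FROM questions q WHERE " + where, params

        sql, params = keywords_to_sql(".net closed questions")
        # -> a tag filter for '.net' plus 'isclosed = ?', with params ['.net', 1]

    Keeping the values in a separate parameter list means the final statement can still be executed as a parameterized query, even though the shape of the WHERE clause is built dynamically.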

    Read the article

  • Python interface to PayPal - urllib.urlencode non-ASCII characters failing

    - by krys
    I am trying to implement PayPal IPN functionality. The basic protocol is as such:

    1. The client is redirected from my site to PayPal's site to complete payment. He logs into his account and authorizes payment.
    2. PayPal calls a page on my server, passing in details as POST. Details include a person's name, address, payment info, etc.
    3. I need to call a URL on PayPal's site internally from my processing page, passing back all the params that were passed in above, plus an additional one called 'cmd' with a value of '_notify-validate'.

    When I try to urllib.urlencode the params which PayPal has sent to me, I get a:

        While calling send_response_to_paypal. Traceback (most recent call last):
          File "<snip>/account/paypal/views.py", line 108, in process_paypal_ipn
            verify_result = send_response_to_paypal(params)
          File "<snip>/account/paypal/views.py", line 41, in send_response_to_paypal
            params = urllib.urlencode(params)
          File "/usr/local/lib/python2.6/urllib.py", line 1261, in urlencode
            v = quote_plus(str(v))
        UnicodeEncodeError: 'ascii' codec can't encode character u'\ufffd' in position 9: ordinal not in range(128)

    I understand that urlencode does ASCII encoding, and in certain cases a user's contact info can contain non-ASCII characters. This is understandable. My question is, how do I encode non-ASCII characters for POSTing to a URL using urllib2.urlopen(req) (or another method)?

    Details: I read the params in PayPal's original request as follows (the GET is for testing):

        def read_ipn_params(request):
            if request.POST:
                params = request.POST.copy()
                if "ipn_auth" in request.GET:
                    params["ipn_auth"] = request.GET["ipn_auth"]
                return params
            else:
                return request.GET.copy()

    The code I use for sending back the request to PayPal from the processing page is:

        def send_response_to_paypal(params):
            params['cmd'] = '_notify-validate'
            params = urllib.urlencode(params)
            req = urllib2.Request(PAYPAL_API_WEBSITE, params)
            req.add_header("Content-type", "application/x-www-form-urlencoded")
            response = urllib2.urlopen(req)
            status = response.read()
            if not status == "VERIFIED":
                logging.warn("PayPal cannot verify IPN responses: " + status)
                return False
            return True

    Obviously, the problem only arises if someone's name or address or another field used for the PayPal payment does not fall into the ASCII range.
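    A minimal sketch of the usual workaround, assuming the offending values are Python 2 unicode objects: encode each value to UTF-8 bytes before handing the dict to urlencode:

        import urllib

        def urlencode_utf8(params):
            """urlencode a dict whose values may be unicode (Python 2)."""
            encoded = {}
            for key, value in params.items():
                if isinstance(value, unicode):
                    value = value.encode('utf-8')   # bytes are safe for quote_plus
                encoded[key] = value
            return urllib.urlencode(encoded)

        # in send_response_to_paypal:  params = urlencode_utf8(params)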

    Read the article

  • Parameterized Queries /Without/ using queries.

    - by Aren B
    I've got a bit of a poor situation here. I'm stuck working with Commerce Server, which doesn't do a whole lot of sanitization/parameterization. I'm trying to build up my queries to prevent SQL injection; however, some things, like the searches / where clause on the search object, need to be built up, and there's no parameterized interface. Basically, I cannot parameterize, but I was hoping to be able to use the same engine to BUILD my query text if possible. Is there a way to do this, aside from writing my own parameterizing engine which will probably still not be as good as parameterized queries?

    Update: Example. The where clause has to be built up as a SQL where clause, essentially:

        CatalogSearch search = /// Create Search object from commerce server
        search.WhereClause = string.Format(
            "[cy_list_price] > {0} AND [Hide] is not NULL AND [DateOfIntroduction] BETWEEN '{1}' AND '{2}'",
            12.99m, DateTime.Now.AddDays(-2), DateTime.Now);

    The above example is how you refine the search; however, we've done some testing and this string is NOT sanitized. This is where my problem lies, because any of those inputs in the .Format could be user input, and while I can clean up my input from text-boxes easily, I'm going to miss edge cases; it's just the nature of things. I do not have the option here to use a parameterized query, because Commerce Server has some insane backwards logic in how it handles the extensible set of fields (schema), and the free-text search words are pre-compiled somewhere. This means I cannot go directly to the SQL tables.

    What I'd /love/ to see is something along the lines of:

        SqlCommand cmd = new SqlCommand("[cy_list_price] > @MinPrice AND [DateOfIntroduction] BETWEEN @StartDate AND @EndDate");
        cmd.Parameters.AddWithValue("@MinPrice", 12.99m);
        cmd.Parameters.AddWithValue("@StartDate", DateTime.Now.AddDays(-2));
        cmd.Parameters.AddWithValue("@EndDate", DateTime.Now);

        CatalogSearch search = /// constructor
        search.WhereClause = cmd.ToSqlString();

    Read the article

  • LINQ to SQL - Tracking New / Dirty Objects

    - by Joseph Sturtevant
    Is there a way to determine if a LINQ object has not yet been inserted in the database (new) or has been changed since the last update (dirty)? I plan on binding my UI to LINQ objects (using WPF) and need it to behave differently depending on whether or not the object is already in the database.

        MyDataContext context = new MyDataContext();
        MyObject obj;

        if (new Random().NextDouble() > .5)
            obj = new MyObject();
        else
            obj = context.MyObjects.First();

        // How can I distinguish these two cases?

    The only simple solution I can think of is to set the primary key of new records to a negative value (my PKs are an identity field and will therefore be set to a positive integer on INSERT). This will only work for detecting new records. It also requires identity PKs, and requires control of the code creating the new object.

    Is there a better way to do this? It seems like LINQ must be internally tracking the status of these objects so that it can know what to do on context.SubmitChanges(). Is there some way to access that "object status"?

    Clarification: Apparently my initial question was confusing. I'm not looking for a way to insert or update records. I'm looking for a way, given any LINQ object, to determine if that object has not been inserted (new) or has been changed since its last update (dirty).

    Read the article

  • How to automatically expand html select element in javascript

    - by xan
    I have a (hidden) HTML select object in my menu attached to a menu button link, so that clicking the link shows the list so you can pick from it. When you click the button, it calls some JavaScript to show the <select>. Clicking away from the <select> hides the list. What I really want is to make the <select> appear fully expanded, as if you had clicked on the "down" arrow, but I can't get this working. I've tried lots of different approaches, but can't make any headway. What I'm doing currently is this:

        <li>
            <a href="javascript:showlist();"><img src="/images/icons/add.png"/>Add favourite</a>
            <select id="list" style="display:none;" onblur="javascript:cancellist()">
            </select>
        </li>

        // in code
        function showlist() {   // using prototype, not jQuery
            $('list').show();   // shows the select list
            $('list').focus();  // sets focus so that when you click away it calls onblur()
        }

    I've tried calling $('list').click(). I've tried setting onfocus="this.click()". But in both cases I'm getting Uncaught TypeError: Object # has no method 'click', which is peculiar, as the linked documentation says that it supports the standard functions. I've tried setting the .size = .length, which works, but doesn't have the same appearance (when you click to open the element, it floats over the rest of the page). Does anyone have any suggestions?

    Read the article

  • Simple Select Statement on MySQL Database Hanging

    - by AlishahNovin
    I have a very simple SQL select statement on a very large table that is non-normalized. (Not my design at all; I'm just trying to optimize while simultaneously trying to convince the owners of a redesign.) Basically, the statement is like this:

        SELECT FirstName, LastName, FullName, State
        FROM Activity
        WHERE (FirstName=@name OR LastName=@name OR FullName=@name) AND State=@state;

    Now, FirstName, LastName, FullName and State are all indexed as BTrees, but without prefix: the whole column is indexed. The State column is a 2-letter state code. What I'm finding is this:

    - When @name = 'John Smith' and @state = '%', the search is really fast and yields results immediately.
    - When @name = 'John Smith' and @state = 'FL', the search takes 5 minutes (and usually this means the web service times out...).
    - When I remove the FirstName and LastName comparisons and only use the FullName and State, both cases above work very quickly.
    - When I keep the FirstName, LastName, FullName and State searches but use LIKE for each, it works fast for @name='John Smith%' and @state='%', but slow for @name='John Smith%' and @state='FL'.
    - When I search against 'John Sm%' and @state='FL', the search finds results immediately.
    - When I search against 'John Smi%' and @state='FL', the search takes 5 minutes.

    Now, just to reiterate, the table is not normalized. John Smith appears many, many times, as do many other users, because there is no reference to some form of users/people table. I'm not sure how many times a single user may appear, but the table itself has 90 million records. Again, not my design...

    What I'm wondering is: though there are many, many problems with this design, what is causing this specific problem? My guess is that the index trees (FirstName, LastName, FullName) are just too large and it takes a very long time traversing them. Anyway, I appreciate anyone's help with this. Like I said, I'm working on convincing them of a redesign, but in the meantime, if someone could help me figure out what the exact problem is, that'd be fantastic.

    Read the article

  • UCMA 3.0 How to build a list of recipients and then broadcast an IM call to those recipients

    - by ficuscr
    I am developing an application using UCMA 3.0 that will run as a service and send out periodic 'broadcasts' in the form of instant message calls. I have been using the book "Professional Unified Communications Development with Microsoft Lync Server 2010", have everything provisioned fine and am able to establish an application endpoint. I am stuck on two aspects though.

    1) How do I get a list of all users of Lync? Everything the UCMA can do is centered on a single user. For example, it allows me to retrieve all contacts/groups present on a given user's contact list, but does not provide any means to query for a list of available contacts that could be added to one of those contact lists. On the MSDN forum I found this post, which leads me to think my best bet is simply to query AD directly.

    2) What is the best way to actually send a broadcast-style IM? My working premise is to attempt something like what I've found in this code example (specifically the public void SendIM() method). So: get a list of recipients from AD (looping over each one to check current presence?), and then use automation to make the IM call for each recipient in the collection. Does that make sense? Do I need to check the presence of each recipient, or do I just optimistically make the IM calls regardless of their current presence status? Can anyone point me to some working code demonstrating sending an IM broadcast? You would think this is probably one of the most common use cases; however, the SDK samples do not cover it. Thanks in advance.

    Read the article

  • UTF-8 BOM signature in PHP files

    - by skidding
    I was writing some commented PHP classes and I stumbled upon a problem. My name (for the @author tag) ends with a "ș" (which is a non-ASCII UTF-8 character, ...and a strange name, I know). Even though I save the file as UTF-8, some friends reported that they see that character totally messed up (È™). This problem goes away by adding the BOM signature. But that thing troubles me a bit, since I don't know that much about it, except from what I saw on Wikipedia and in some other similar questions here on SO.

    I know that it adds some bytes at the beginning of the file, and from what I understood it's not that bad, but I'm concerned because the only problematic scenarios I read about involved PHP files. And since I'm writing PHP classes to share them, being 100% compatible is more important than having my name in the comments. But I'm trying to understand the implications: should I use it without worrying, or are there cases when it might cause damage? When? Thanks!
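    For reference, a small Python sketch (not part of the question) of what the BOM actually is and how it can be detected or stripped from a file's bytes:

        import codecs

        BOM = codecs.BOM_UTF8          # the three bytes EF BB BF

        def strip_bom(path):
            """Return the file's bytes with a leading UTF-8 BOM removed, if any."""
            with open(path, 'rb') as f:
                data = f.read()
            if data.startswith(BOM):
                return data[len(BOM):]
            return data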

    Read the article

  • REST application, Transactions, Cache drop

    - by Julian Davchev
    Hi, I am building a REST API in PHP with a memcache layer on top for caching all resources. After some reading experience, it turns out it's best when documents are as simple as possible, mainly due to dropping cache sequences. So if there are 'building' and 'room' entities, in the 'room' document I would only place the id of the 'building' and not its whole data. Then on the API client side I would merge data as needed.

    The problem comes when I need to update/insert (in most cases more than one table). I update one resource, but the second update fails for whatever reason, and there are now database inconsistencies. I see several solutions:

    1. Implement REST transactions, which I find wrong and complex, as the idea is to be stateless and easy.
    2. On update/insert actions, pass more complex data (not single entities) so I can force transactions at the API level. But this will make it weird that your GET document structure is the same as your PUT document structure. And again it somehow makes drop sequences complex.

    Any pointers are more than welcome. Cheers,

    Read the article

  • Minimal-change algorithm which maximises 'swapping'

    - by Kim Bastin
    This is a question on combinatorics from a non-mathematician, so please try to bear with me! Given an array of n distinct characters, I want to generate subsets of k characters in a minimal-change order, i.e. an order in which generation n+1 contains exactly one character that was not in generation n. That's not too hard in itself. However, I also want to maximise the number of cases in which the character that is swapped out in generation n+1 is the same character that was swapped in in generation n. To illustrate, for n=7, k=3:

        abc  abd  abe* abf* abg* afg  aeg* adg* acg* acd
        ace* acf* aef  adf* ade  bde  bdf  bef  bcf* bce
        bcd* bcg* bdg  beg* bfg* cfg  ceg* cdg* cde  cdf*
        cef  def  deg  dfg  efg

    The asterisked strings indicate the case I want to maximise; e.g. the e that is new in generation 3, abe, replaces a d that was new in generation 2, abd. It doesn't seem possible to have this happen in every generation, but I want it to happen as often as possible. Typical array sizes that I use are 20-30 and subset sizes around 5-8.

    I'm using an odd language, Icon (or actually its derivative Unicon), so I don't expect anyone to post code that I can use directly. But I will be grateful for answers or hints in pseudo-code, and will do my best to translate C etc. Also, I have noticed that problems of this kind are often discussed in terms of arrays of integers, and I can certainly apply solutions posted in such terms to my own problem. Thanks, Kim Bastin
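    A minimal Python sketch of the classic revolving-door ordering, which guarantees the single-change property but makes no attempt at the second goal of preferentially swapping out the most recently added element; it works on index tuples into the character array rather than on the characters themselves, and is offered as a starting point, not as the ordering shown above:

        def revolving_door(n, k):
            """k-subsets of range(n) in an order where neighbours differ by one swap."""
            if k == 0:
                return [[]]
            if k == n:
                return [list(range(n))]
            # subsets not containing n-1, followed by those containing n-1;
            # the latter are taken in reverse so the boundary is also a single swap
            without_last = revolving_door(n - 1, k)
            with_last = [c + [n - 1] for c in reversed(revolving_door(n - 1, k - 1))]
            return without_last + with_last

        chars = "abcdefg"
        for combo in revolving_door(len(chars), 3)[:5]:
            print("".join(chars[i] for i in combo))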

    Read the article

  • How to create a variadic (with variable length argument list) function wrapper in JavaScript

    - by U-D13
    The intention is to build a wrapper to provide a consistent method of calling native functions with variable arity on various script hosts, so that the script could be executed in a browser as well as in the Windows Script Host or other script engines. I am aware of 3 methods, each of which has its own drawbacks.

    eval() method:

        function wrapper() {
            var str = '';
            for (var i = 0; i < arguments.length; i++)
                str += (str ? ', ' : '') + 'arguments[' + i + ']';
            return eval('[native_function](' + str + ')');
        }

    switch() method:

        function wrapper() {
            switch (arguments.length) {
                case 0: return [native_function]();
                case 1: return [native_function](arguments[0]);
                ...
                case n: return [native_function](arguments[0], arguments[1], ... arguments[n-1]);
            }
        }

    apply() method:

        function wrapper() {
            return [native_function].apply([native_function_namespace], arguments);
        }

    What's wrong with them, you ask?

    1. Well, shall we delve into all the reasons why eval() is evil? And also all the string concatenation... Not a solution to be labeled "elegant".
    2. One can never know the maximum n and thus how many cases to prepare. This would also stretch the script to immense proportions and sin against the holy DRY principle.
    3. The script could get executed on older (pre-JavaScript 1.3 / ECMA-262-3) engines that don't support the apply() method.

    Now the question part: is there any other solution out there?

    Read the article

  • Unit Testing a Java Chat Application

    - by Epitaph
    I have developed a basic Chat application in Java. It consists of a server and multiple clients. The server continually monitors for incoming messages and broadcasts them to all the clients. The client is made up of a Swing GUI with a text area (for messages sent by the server and other clients), a text field (to send text messages) and a button (SEND). The client also continually monitors for incoming messages from other clients (via the server). This is achieved with threads and event listeners, and the application works as expected.

    But how do I go about unit testing my chat application? As the methods involve establishing a connection with the server and sending/receiving messages from the server, I am not sure if these methods should be unit tested. As per my understanding, unit testing shouldn't be done for tasks like connecting to a database or network. The few test cases that I could come up with are:

    1) The max limit of the text field
    2) Client can connect to the Server
    3) Server can connect to the Client
    4) Client can send message
    5) Client can receive message
    6) Server can send message
    7) Server can receive message
    8) Server can accept connections from multiple clients

    But, since most of the above methods involve some kind of network communication, I cannot perform unit testing. How should I go about unit testing my chat application?

    Read the article

  • RESTful API design question - how should one allow users to create new resource instances?

    - by Tamás
    I'm working in a research group where we intend to publish implementations of some of the algorithms we develop on the web via a RESTful API. Most of these algorithms work on small to medium size datasets, and in many cases a user of our services might want to run multiple queries (with different parameters) on the same dataset, so it seems reasonable to me to allow users to upload their datasets in advance and refer to them in their queries later. In this sense, a dataset could be a resource in my API, and an algorithm could be another.

    My question is: how should I let the users upload their own datasets? I cannot simply let users upload their data to /dataset/dataset_id, as letting the users invent their own dataset_ids might result in ID collisions and users overwriting each other's datasets by accident. (I believe one of the most frequently used dataset IDs would be test.) I think an ideal way would be to have a dedicated URL (like /dataset/upload) where users can POST their datasets and the response would contain a unique ID under which the dataset was stored, but I'm not sure that it does not violate the basic principles of REST. What is the preferred way of dealing with such scenarios?
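    A minimal sketch of the POST-and-return-an-ID pattern using Flask (Flask and every name below are illustrative assumptions, not part of the question); letting clients POST to a collection-style URL and having the server mint the identifier is the conventional answer:

        import uuid
        from flask import Flask, request, jsonify

        app = Flask(__name__)
        datasets = {}                      # in-memory store, just for the sketch

        @app.route("/dataset/upload", methods=["POST"])
        def upload_dataset():
            dataset_id = uuid.uuid4().hex  # server-designated, collision-free id
            datasets[dataset_id] = request.get_data()
            resp = jsonify(id=dataset_id)
            resp.status_code = 201
            resp.headers["Location"] = "/dataset/" + dataset_id
            return resp

        @app.route("/dataset/<dataset_id>", methods=["GET"])
        def get_dataset(dataset_id):
            return datasets[dataset_id]    # a real service would 404 on unknown ids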

    Read the article

  • Oracle sqlldr: column not allowed here

    - by Wade Williams
    Can anyone spot the error in this attempted data load? The '\\N' is because this is an import of an OUTFILE dump from MySQL, which puts \N for NULL fields. The decode is to catch cases where the field might be an empty string, or might have \N. Using Oracle 10g on Linux.

        load data
        infile objects.txt
        discardfile objects.dsc
        truncate
        into table objects
        fields terminated by x'1F' optionally enclosed by '"'
        (ID INTEGER EXTERNAL NULLIF (ID='\\N'),
         TITLE CHAR(128) NULLIF (TITLE='\\N'),
         PRIORITY CHAR(16) "decode(:PRIORITY, BLANKS, NULL, '\\N', NULL)",
         STATUS CHAR(64) "decode(:STATUS, BLANKS, NULL, '\\N', NULL)",
         ORIG_DATE DATE "YYYY-MM-DD HH:MM:SS" NULLIF (ORIG_DATE='\\N'),
         LASTMOD DATE "YYYY-MM-DD HH:MM:SS" NULLIF (LASTMOD='\\N'),
         SUBMITTER CHAR(128) NULLIF (SUBMITTER='\\N'),
         DEVELOPER CHAR(128) NULLIF (DEVELOPER='\\N'),
         ARCHIVE CHAR(4000) NULLIF (ARCHIVE='\\N'),
         SEVERITY CHAR(64) "decode(:SEVERITY, BLANKS, NULL, '\\N', NULL)",
         VALUED CHAR(4000) NULLIF (VALUED='\\N'),
         SRD DATE "YYYY-MM-DD" NULLIF (SRD='\\N'),
         TAG CHAR(64) NULLIF (TAG='\\N')
        )

    Sample data (record 1). The ^_ represents the unprintable 0x1F delimiter:

        1987^_Component 1987^_\N^_Done^_2002-10-16 01:51:44^_2002-10-16 01:51:44^_import^_badger^_N^_^_N^_0000-00-00^_none

    Error:

        Record 1: Rejected - Error on table objects, column SEVERITY.
        ORA-00984: column not allowed here

    Read the article

  • How can I work around "Xcode could not locate source file"

    - by Septih
    Hello, I'm working with the ID3 framework in Xcode (which has since disappeared off the face of the web, including Google's cache!). I'm testing out an import MP3 feature which allows users to edit the tags as they import the files. One of the test cases is a corrupt or invalid MP3 with no proper ID3 header. The problem I'm having is that when updating the tags of the invalid MP3 (updateFile:), the ID3 framework attempts to use id3V1Tag.m (I assume it falls back to this if it can't find the v2 tag), and this is where I get the Xcode error (whilst running the program, not building):

        Xcode could not locate source file: id3V1Tag.m (line: 299)

    Even in a release build this crashes the program, so it's not something I can really ignore. I've tried putting a try/catch block around it, but it's not treated as an exception so doesn't get caught. The function to load the tag data for the file returns a BOOL, but it appears this only returns false if the given file doesn't exist, so this doesn't help either. Current code:

        [tagData release];
        tagData = [[TagAPI alloc] initWithGenreList:nil];
        tagsLoaded = [tagData examineFile:exportPath];
        if (tagsLoaded) {
            [tagData setTitle:title];
            [tagData setArtist:artist];
            [tagData setComments:comments];
            @try {
                [tagData updateFile];
            }
            @catch (id e) {
                NSLog(@"h");
            }
        }

    Read the article
