Search Results

Search found 16560 results on 663 pages for 'far'.

Page 602/663 | < Previous Page | 598 599 600 601 602 603 604 605 606 607 608 609  | Next Page >

  • Is Java assert broken?

    - by BlairHippo
    While poking around the questions, I recently discovered the assert keyword in Java. At first, I was excited. Something useful I didn't already know! A more efficient way for me to check the validity of input parameters! Yay learning! But then I took a closer look, and my enthusiasm was not so much "tempered" as "snuffed-out completely" by one simple fact: you can turn assertions off.*

    This sounds like a nightmare. If I'm asserting that I don't want the code to keep going if the input listOfStuff is null, why on earth would I want that assertion ignored? It sounds like if I'm debugging a piece of production code and suspect that listOfStuff may have been erroneously passed a null but don't see any logfile evidence of that assertion being triggered, I can't trust that listOfStuff actually got sent a valid value; I also have to account for the possibility that assertions may have been turned off entirely.

    And this assumes that I'm the one debugging the code. Somebody unfamiliar with assertions might see that and assume (quite reasonably) that if the assertion message doesn't appear in the log, listOfStuff couldn't be the problem. If your first encounter with assert was in the wild, would it even occur to you that it could be turned off entirely? It's not like there's a command-line option that lets you disable try/catch blocks, after all.

    All of which brings me to my question (and this is a question, not an excuse for a rant! I promise!): What am I missing? Is there some nuance that renders Java's implementation of assert far more useful than I'm giving it credit for? Is the ability to enable/disable it from the command line actually incredibly valuable in some contexts? Am I misconceptualizing it somehow when I envision using it in production code in lieu of statements like if (listOfStuff == null) barf();? I just feel like there's something important here that I'm not getting.

    *Okay, technically speaking, they're actually off by default; you have to go out of your way to turn them on. But still, you can knock them out entirely.
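
    For readers who haven't met the keyword before, here is a minimal sketch (not part of the original question) contrasting the two styles being compared: an assert, which is silently skipped unless the JVM is launched with -ea, and an explicit check that always runs. The class and method names are made up for illustration.

        // Run with:  java -ea MyMainClass   (without -ea the assert is a no-op)
        public class StuffProcessor {

            // Relies on assertions: skipped entirely when the JVM runs without -ea.
            public void processWithAssert(java.util.List<String> listOfStuff) {
                assert listOfStuff != null : "listOfStuff must not be null";
                // ... work with listOfStuff ...
            }

            // Always-on precondition, closer to the if (listOfStuff == null) barf() idiom.
            public void processWithCheck(java.util.List<String> listOfStuff) {
                if (listOfStuff == null) {
                    throw new IllegalArgumentException("listOfStuff must not be null");
                }
                // ... work with listOfStuff ...
            }
        }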

    Read the article

  • difference equations in MATLAB - why the need to switch signs?

    - by jefflovejapan
    Perhaps this is more of a math question than a MATLAB one, not really sure. I'm using MATLAB to compute an economic model - the New Hybrid ISLM model - and there's a confusing step where the author switches the sign of the solution. First, the author declares symbolic variables and sets up a system of difference equations. Note that the suffixes "a" and "2t" both mean "time t+1", "2a" means "time t+2" and "t" means "time t": %% --------------------------[2] MODEL proc-----------------------------%% % Define endogenous vars ('a' denotes t+1 values) syms y2a pi2a ya pia va y2t pi2t yt pit vt ; % Monetary policy rule ia = q1*ya+q2*pia; % ia = q1*(ya-yt)+q2*pia; %%option speed limit policy % Model equations IS = rho*y2a+(1-rho)yt-sigma(ia-pi2a)-ya; AS = beta*pi2a+(1-beta)*pit+alpha*ya-pia+va; dum1 = ya-y2t; dum2 = pia-pi2t; MPs = phi*vt-va; optcon = [IS ; AS ; dum1 ; dum2; MPs]; He then computes the matrix A: %% ------------------ [3] Linearization proc ------------------------%% % Differentiation xx = [y2a pi2a ya pia va y2t pi2t yt pit vt] ; % define vars jopt = jacobian(optcon,xx); % Define Linear Coefficients coef = eval(jopt); B = [ -coef(:,1:5) ] ; C = [ coef(:,6:10) ] ; % B[c(t+1) l(t+1) k(t+1) z(t+1)] = C[c(t) l(t) k(t) z(t)] A = inv(C)*B ; %(Linearized reduced form ) As far as I understand, this A is the solution to the system. It's the matrix that turns time t+1 and t+2 variables into t and t+1 variables (it's a forward-looking model). My question is essentially why is it necessary to reverse the signs of all the partial derivatives in B in order to get this solution? I'm talking about this step: B = [ -coef(:,1:5) ] ; Reversing the sign here obviously reverses the sign of every component of A, but I don't have a clear understanding of why it's necessary. My apologies if the question is unclear or if this isn't the best place to ask.
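
    One way to read the sign flip in the snippet above (an illustration of the algebra, not the author's own derivation): the Jacobian splits into a block for the t+1-dated variables and a block for the t-dated ones, and the model equations are set to zero, so moving the t-dated block to the other side of the equals sign is what introduces the minus sign.

        % coef(:,1:5) * x_{t+1} + coef(:,6:10) * x_t = 0, hence
        \[
            J_{t+1}\,x_{t+1} + J_{t}\,x_{t} = 0
            \;\Longrightarrow\;
            \underbrace{(-J_{t+1})}_{B}\,x_{t+1} \;=\; \underbrace{J_{t}}_{C}\,x_{t},
        \]
        % which matches  B = [-coef(:,1:5)],  C = [coef(:,6:10)]  and the comment
        % B[...(t+1)] = C[...(t)] in the listing.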

    Read the article

  • Optimizing spacing of mesh containing a given set of points

    - by Feynman
    I tried to summarize this as best as possible in the title. I am writing an initial value problem solver in the most general way possible. I start with an arbitrary number of initial values at arbitrary locations (inside a boundary). The first part of my program creates a mesh/grid (I am not sure which is the correct nuance), with N points total, that contains all the initial values. My goal is to optimize the mesh such that the spacing is as uniform as possible. My solver seems to work half decently (it needs some more obscure debugging that is not relevant here). I am starting with one dimension. I intend to generalize the algorithm to an arbitrary number of dimensions once I get it working consistently. I am writing my code in Fortran, but feel free to reply with pseudocode or the language of your choice.

    Allow me to elaborate with an example. Say I am working on a closed interval [1,10]:

        xmin=1
        xmax=10

    Say I have 3 initial points: xmin, 5 and xmax:

        num_ivc=3
        known(num_ivc)=[xmin,5,xmax]   // my arrays start at 1. Assume "known" starts sorted

    I store my mesh/grid points in an array called coord. Say I want 10 points total in my mesh/grid:

        N=10
        coord(10)

    Remember, all this is arbitrary--except the variable names of course. The algorithm should set coord to {1,2,3,4,5,6,7,8,9,10}.

    Now for a less trivial example:

        num_ivc=3
        known(num_ivc)=[xmin,5.5,xmax]

    or just

        num_ivc=1
        known(num_ivc)=[5.5]

    Now, would you have 5 evenly spaced points on the interval [1, 5.5] and 5 evenly spaced points on the interval (5.5, 10]? But there is more space between 1 and 5.5 than between 5.5 and 10. So would you have 6 points on [1, 5.5] followed by 4 on (5.5, 10]? The key is to minimize the difference in spacing.

    I have been working on this for 2 days straight and I can assure you it is a lot trickier than it sounds. I have written code that:

    - only works if N is large
    - only works if N is small
    - only works if the known points are close together
    - only works if the known points are far apart
    - only works if at least one of the known points is near a boundary
    - only works if none of the known points are near a boundary

    So as you can see, I have coded the gamut of almost-solutions. I cannot figure out a way to get it to perform equally well in all possible scenarios (that is, create the optimum spacing).
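
    Since the post invites pseudocode or any language, here is a rough sketch (in Java rather than the poster's Fortran, and not the poster's own code) of one proportional-allocation idea: give each gap between consecutive known points a share of the remaining grid points proportional to its length, so the spacing stays as uniform as the constraints allow. It assumes known is sorted and already contains xmin and xmax.

        // Sketch only: n total points, known[] sorted and including both endpoints.
        static double[] buildGrid(double[] known, int n) {
            java.util.List<Double> coord = new java.util.ArrayList<>();
            int remaining = n - known.length;                 // interior points still to place
            double length = known[known.length - 1] - known[0];
            for (int i = 0; i < known.length - 1; i++) {
                coord.add(known[i]);
                double gap = known[i + 1] - known[i];
                int share = (i == known.length - 2)
                        ? remaining                           // last gap absorbs rounding leftovers
                        : (int) Math.round(remaining * gap / length);
                for (int k = 1; k <= share; k++) {
                    coord.add(known[i] + gap * k / (share + 1));  // even spacing inside the gap
                }
                remaining -= share;
                length -= gap;                                // renormalise for the gaps left
            }
            coord.add(known[known.length - 1]);
            return coord.stream().mapToDouble(Double::doubleValue).toArray();
        }

    For the trivial example (known = {1, 5, 10}, n = 10) this reproduces {1,2,...,10}; for known = {1, 5.5, 10} it puts six points on [1, 5.5] and four on (5.5, 10], as the post suggests.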

    Read the article

  • No exception, no error, but I still don't receive the JSON object from my HTTP POST

    - by user2978538
    My source code: final Thread t = new Thread() { public void run() { Looper.prepare(); HttpClient client = new DefaultHttpClient(); HttpConnectionParams.setConnectionTimeout(client.getParams(), 10000); HttpResponse response; JSONObject obj = new JSONObject(); try { HttpPost post = new HttpPost("http://pc.dyndns-office.com/mobile.asp"); obj.put("Model", ReadIn1); obj.put("Product", ReadIn2); obj.put("Manufacturer", ReadIn3); obj.put("RELEASE", ReadIn4); obj.put("SERIAL", ReadIn5); obj.put("ID", ReadIn6); obj.put("ANDROID_ID", ReadIn7); obj.put("Language", ReadIn8); obj.put("BOARD", ReadIn9); obj.put("BOOTLOADER", ReadIn10); obj.put("BRAND", ReadIn11); obj.put("CPU_API", ReadIn12); obj.put("DISPLAY", ReadIn13); obj.put("FINGERPRINT", ReadIn14); obj.put("HARDWARE", ReadIn15); obj.put("UUID", ReadIn16); StringEntity se = new StringEntity(obj.toString()); se.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, "application/json")); post.setEntity(se); post.setHeader("host", "http://pc.dyndns-office.com/mobile.asp"); response = client.execute(post); if (response != null) { InputStream in = response.getEntity().getContent(); } } catch (Exception e) { e.printStackTrace(); } Looper.loop(); } }; t.start(); } } i want to send an Json object to a Website. As far as I can see, I set the header, but still I get this exception, can someone help me? (I'm using Android-Studio) __ Edit: i don't get any exceptions anymore, but still i do not receive the json packet. When i manually call the website i get a log file entry. Does anyone know, what's wrong? Edit2: When i debug i get as response "HTTP/1.1 400 bad request" i'm sure its not an permission problem. Any ideas?
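
    As a debugging aid (illustrative, not from the original post), it can help to log the status line and the body of the response around the existing client.execute(post) call, since a 400 from the server usually comes with an explanation in the body. EntityUtils is org.apache.http.util.EntityUtils and Log is android.util.Log; the tag name is made up. It may also be worth checking the post.setHeader("host", ...) line, since the Host header is being given a full URL rather than a bare host name, and HttpClient would normally set that header itself.

        response = client.execute(post);
        if (response != null) {
            int status = response.getStatusLine().getStatusCode();
            String body = EntityUtils.toString(response.getEntity(), "UTF-8");
            // Shows the server's reason for "HTTP/1.1 400 bad request" in Logcat
            Log.d("JsonUpload", "HTTP " + status + ": " + body);
        }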

    Read the article

  • Problem with return 2 libc method

    - by jth
    Hi, I'am trying to understand the return2libc method. I'am using an ubuntu linux 9.10, 32 bit with ASLR disabled. In theory, it sounds quite easy, overwrite the saved eip with the address of system() (or whatever function you want), then put the address to which system() should return and after that, the parameter for system, the "/bin/bash"-string. But what happens is that my exploit keeps segfaulting the vulnerable program. I assume something with the system()-address went wrong. This is what I did so far: Determined the address of system(): (gdb) print system $1 = {<text variable, no debug info>} 0x167020 <system> (gdb) x/x system 0x167020 <system>: 0x890cec83 I used the subsequent x/x system because those 3 bytes returned by print system looks like an index in some sort of jumptable (PLT?), so I assume 0x890cec83 is the right address which is used to overwrite the saved eip. After that I determined the address of the /bin/bash string in memory, using a small C program which basically consists of this line: printf("Address of string /bin/bash: %p\n", getenv("SHELL")); Then I looked a little bit around in the memory and fount /bin/bash: (gdb) x/s 0xbffff6ca 0xbffff6ca: "/bin/bash" After I gathered this information, I filled the buffer: (gdb) b 9 Breakpoint 1 at 0x8048407: file victim.c, line 9. (gdb) r `perl -e 'print "A"x9 . "\x83\xec\x0c\x89FAKE\xca\f6\ff\bf";'` Breakpoint 1, main (argc=1111638594, argv=0xc360cca) at victim.c:10 10 return 0; (gdb) x/s 0xbffff6ca 0xbffff6ca: "/bin/bash" Stack frame looks like this: (gdb) i f Stack level 0, frame at 0xbffff440: eip = 0x8048407 in main (victim.c:10); saved eip 0x890cec83 source language c. Arglist at 0xbffff438, args: argc=1111638594, argv=0xc360cca Locals at 0xbffff438, Previous frame's sp is 0xbffff440 Saved registers: ebp at 0xbffff438, eip at 0xbffff43c This seems all right to me, saved eip was overwritten with the (hopefully) correct system()-address, return address for system was set to "FAKE" (shouldn't matter) and the address of /bin/bash also seems to be correct. When I'am continuing the execution, victim segfaults on some strange address and certainly not in 0x890cec83: (gdb) cont Continuing. Program received signal SIGSEGV, Segmentation fault. 0x0804840d in main (argc=Cannot access memory at address 0x41414149 ) at victim.c:11 11 } Has anyone an explanation or a hint what happens here and why the execution isn't redirected to 0x890cec83? Thanks in advance, any hint, and be it only vague, would be appreciated. I have no idea why this doesn't work.

    Read the article

  • Adding a row to a DataList

    - by Scrappy
    I have been searching around the web for a solution to this issue but have come across nothing so far. Basically I have a table as shown below, whhich i made up of itemtemplate fields and is populated by a dataset from my database. It shows brands to the user of which they can then click and go onto another page. I need to add another option to the table called "All Brands". Thus then I can use this to go to a page showing all the brands. However I can not seem to easily add this into the datalist. <asp:DataList id="TypesList" runat="server" Visible="true" RepeatColumns="3" Width="100%" ItemStyle-Width="25%" ItemStyle-HorizontalAlign="Center"> <ItemTemplate> <div style="position:relative;vertical-align:top;"> <table border="0" cellpadding="0" cellspacing="0" width="100%"> <tr> <td align="center" style="height:170px;vertical-align:top;text-align:center;" valign="top"> <asp:Label ID="lblID" runat="server" Visible="false" Text='<%#DataBinder.Eval(Container.DataItem,"batteryTypeID")%>'></asp:Label> <a href='/<%#DataBinder.Eval(Container.DataItem,"catid")%>/<%#DataBinder.Eval(Container.DataItem,"catname")%>/<%#DataBinder.Eval(Container.DataItem,"brandid")%>/<%#DataBinder.Eval(Container.DataItem,"brand_name")%>/<%#DataBinder.Eval(Container.DataItem,"batteryTypeID")%>/<%#DataBinder.Eval(Container.DataItem,"typeName")%>' target='_self'> <asp:Image ID="imgProd" runat="server" ImageUrl='images/none.jpg' /> </a> </td> </tr> <tr> <td class="productdesc" style="text-align:center;vertical-align:top;"> <span style="color:#000;font-weight:bold;font-size:120%;"> <%#DataBinder.Eval(Container.DataItem, "typeName").ToString%> </span> </td> </tr> </table> </div> </ItemTemplate> </asp:DataList> Thanks in advance for any help

    Read the article

  • Is this scatter-brained workflow realizable in Git?

    - by Luke Maurer
    This is what I'd like my workflow to look like at a conceptual level:

    1. I hack on my new feature for a while.
    2. I notice a typo in a comment and change it. Since the typo is completely unrelated to anything else, I put that change in a pile of comment fixes.
    3. I keep working on the code.
    4. I realize I need to flesh out a few utility functions; I do so, and put that change in its own pile.
    5. Steps 2, 3, and 4 each repeat throughout the day.
    6. I finish the new feature and put the changes for that feature in a pile.
    7. I push nice patches upstream: one with the new feature, a few for the other tweaks, and one with a bunch of comment fixes if enough have accumulated.

    Since I'm both lazy and a perfectionist, I want to be able to do some things out of order: I might correct a typo but forget to put it in the comment fix pile; when I prepare the upstream patches (I'm using git-svn, so I need to be pretty deliberate about these), I'll then pull out the comment fixes at that point. I might forget to separate things altogether until the very end. But I might /also/ have committed some of the piles along the way (sorry, the metaphor is breaking down …).

    This is all rather like just using Eclipse changesets with SVN, only I can have different changes to the same file in different piles (having to disentangle changes into different commits is what motivated me to move to git-svn, in fact …), and with Git I can have my full discombobulated change history, experimental branches and all, but still make a nice, neat patch.

    I've just recently started with Git after having wanted to for a good while, and I'm quite happy so far. The biggest way in which the above workflow doesn't really map into Git, though, is that a "bin" can't really be just a local branch, since the working tree only ever reflects the state of a single branch. Or maybe the Git index is a "pile," and what I want is to have more than one somehow (effectively). I can think of a few ways to approximate what I want (maybe creative use of stash? Intricate stash-checkout-merge dances?), but my grasp on Git isn't solid enough to be sure of how best to put all the pieces together. It's said that Git is more a toolkit than a VCS, so I guess the question comes down to: How do I build this thing with these tools?
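
    One possible mapping of the "piles" onto stock Git commands, sketched below (it is not the only way, and the commit messages and rebase range are placeholders): git add -p stages individual hunks, so different changes to the same file can land in different commits, and an interactive rebase lets local history be reshuffled into clean patches before git-svn publishes them.

        # Stage only the hunks that belong to the "comment fixes" pile, even from a file
        # that also contains feature work:
        git add -p
        git commit -m "Fix typos in comments"

        # Later, stage the utility-function hunks the same way:
        git add -p
        git commit -m "Flesh out utility helpers"

        # Park half-finished work while switching piles:
        git stash            # ...and later: git stash pop

        # Before publishing, reorder/squash the local commits into tidy patches:
        git rebase -i HEAD~10        # or rebase onto the git-svn upstream branch
        git svn dcommit              # hand the cleaned-up commits to Subversion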

    Read the article

  • Good working habits to observe in project development?

    - by Will Marcouiller
    As my development experience grows, I see fit to stick to best practices from here and there to build somehow my own working practices while observing the conventions, etc. I'm currently working on a project which my goals is to graduate the security access model from an environment's Active Directory to another environment's automatically. I don't know for any of you, but as far as I'm concerned, I meet some real difficulties sticking to only one way, then develop. I mean, I learn something new everyday while visiting SO, and recently wanted to get acquainted with generics. On the other hand, I better know the Façade pattern which proved to be very practical in transactional programming in process systems. This seems to be less practical for desktop application as there are plenty of variables to consider in a desktop application that you don't have to care in transactional programming, as you're playing only with information data. As for my current project, I have: Groups; Organizational Units; Users. Which are all considered an entry in the Active Directory. This points out to be a good candidate for generics, as also approached this way by Bart de Smett's Linq to AD on CodePlex. He has a DirectorySource<T>, and to manage let's say groups, then he instantiate a source with the proper type: var groups = new DirectorySource<Group>(); This seems to be very a good way of doing. Despite, I seem to go from one pattern to another and I don't seem to be able to strictly stick to one. While I'm aware that one must not stay with only one way of doing, since each pattern statisfies certain advantages, while also illustrating disadvantages under some usage conditions, I seem to want to develop with both patterns having a singleton Façade class with the underlying factories which represent the sub systems: GroupsFactory; UsersFactory; OrganizationalUnitsFactory. Each of the factories offers the possible operations for their respective entity (group, user, OU). To make a very long story short, I often have plenty of ideas while developping and this causes me some trouble, as I go from an idea to another feeling completely lost after a while. Yet I understand the advantages and disavantages, I have no trouble choosing from one pattern to another depending on the situation. Nevertheless, when it comes to programming itself, if I'm not part of a team, I feel sometimes like I can't do anything good. That is, because I can't stand not doing something "perfect" the first time. The role I play within the project is both: the project manager and the programmer. I am more comfortable in the project manager role, architectural role, analytical role than the developer's. Has any of you some good habbits to observe in project development? Thanks to you all! =)

    Read the article

  • retrieving information from web service calls

    - by Monte Chan
    Hi all, I am trying to retrieve information from a web service call. The following is what I have so far. In my text view, it is showing Map {item=anyType{key=TestKey; value=2;}; item=anyType{key=TestField; value=adsfasd; };} When I ran that in the debugger, I can see the information above in the variable, tempvar. But the question is, how do I retrieve the information (i.e. the actual values of "key" and "value" in each of the array positions)? Yes, I know there is a lot going on in onCreate and I will fix it later. Thanks in advance, Monte My codes are as follows, import java.util.Vector; import android.app.Activity; import android.os.Bundle; import android.widget.TextView; import org.ksoap2.SoapEnvelope; import org.ksoap2.serialization.SoapObject; import org.ksoap2.serialization.SoapSerializationEnvelope; import org.ksoap2.transport.AndroidHttpTransport; public class ViewHitUpActivity extends Activity { private static final String SOAP_ACTION = "test_function"; private static final String METHOD_NAME = "test_function"; private static final String NAMESPACE = "http://www.monteandjanicechan.com/"; private static final String URL = "http://www.monteandjanicechan.com/ws/test_ws.cfc?wsdl"; // private Object resultRequestSOAP = null; private TextView tv; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME); tv = (TextView)findViewById(R.id.people_view); //SoapObject request.addProperty("test_item", "1"); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); envelope.setOutputSoapObject(request); AndroidHttpTransport androidHttpTransport = new AndroidHttpTransport(URL); try { androidHttpTransport.call(SOAP_ACTION, envelope); /* resultRequestSOAP = envelope.getResponse(); Vector tempResult = (Vector) resultRequestSOAP("test_functionReturn"); */ SoapObject resultsRequestSOAP = (SoapObject) envelope.bodyIn; Vector tempResult = (Vector) resultsRequestSOAP.getProperty("test_functionReturn"); int testsize = tempResult.size(); // SoapObject test = (SoapObject) tempResult.get(0); //String[] results = (String[]) resultRequestSOAP; Object tempvar = tempResult.elementAt(1); tv.setText(tempvar.toString()); } catch (Exception aE) { aE.printStackTrace (); tv.setText(aE.getClass().getName() + ": " + aE.getMessage()); } } }
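
    A sketch of how the key/value pairs could be pulled out of the result (it assumes, based on the Map {item=anyType{key=...; value=...}} output, that each element of tempResult is itself a SoapObject with "key" and "value" properties; that structure is not confirmed in the post):

        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < tempResult.size(); i++) {
            Object element = tempResult.elementAt(i);
            if (element instanceof SoapObject) {
                SoapObject item = (SoapObject) element;
                // getProperty returns Object; toString gives the text content
                String key = item.getProperty("key").toString();
                String value = item.getProperty("value").toString();
                sb.append(key).append(" = ").append(value).append('\n');
            }
        }
        tv.setText(sb.toString());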

    Read the article

  • A question about the basics of LINQ to SQL

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the easy of use and good performance. I used to think that when doing LINQ queries like from Customer in DB.Customers where Customer.Age > 30 select Customer LINQ gets all customers from the database ("SELECT * FROM Customers"), moves them to the Customers array and then makes a search in that Array using .NET methods. This is very inefficient, what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application. Now after experiencing how actually fast LINQ to SQL is, I start to suspect that when doing that query I just wrote, LINQ somehow converts it to a SQL Query string SELECT * FROM Customers WHERE Age > 30 And only when necessary it will run the query. So my question is: am I right? And when is the query actually run? The reason why I'm asking is not only because I want to understand how it works in order to build good optimized applications, but because I came across the following problem. I have 2 tables, one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in past 10 days. It's done with this simple query: from Book in DB.Books where (from Sale in DB.Sales where Sale.SalesAmount >= 50 && Sale.DateOfSale >= DateTime.Now.AddDays(-10) select Sale.BookID).Contains(Book.ID) select Book The point is, I have to use the checking part in several queries and I decided to create an array with IDs of all popular books: var popularBooksIDs = from Sale in DB.Sales where Sale.SalesAmount >= 50 && Sale.DateOfSale >= DateTime.Now.AddDays(-10) select Sale.BookID; BUT when I try to do the query now: from Book in DB.Books where popularBooksIDs.Contains(Book.ID) select Book It doesn't work! That's why I think that we can't use thins kinds of shortcuts in LINQ to SQL queries, like we can't use them in real SQL. We have to create straightforward queries, am I right?

    Read the article

  • Stuck with Regular Expression code to apply HTML tag to text but exclude if inside <?> tag

    - by James Buckingham
    Hi there. I'm trying to write a bit of regex which would go through some text, written by our Editors, and apply an <acronym> tag to the first instance it finds of an abbreviation set we hold in our "Glossary of Terms". So for this example I've used the abbreviation ITS. 1st thing I thought I'd do is setup an example with a mix of scenerios I could test against, i.e. ITS sitting with punctuation, in HTML tags & ones that we've applied this to already (in other words the script has run through this before, so no need to do again). I'm almost there but just got stuck at the last point :-(. Here's the regex I've got so far - <[^<|]+?>?>ITS<[^<]+?>|ITS The Example - FROM ( EVERY ITS IN BOLD TO BE WRAPPED WITH ACRONYM ): I want you to tag thisITS, but not this wrapped one - <acronym title="ITS" id="thisIsATest">ITS</acronym> This is another test as I still want to update <p>ITS</p> that have other HTML tags wrapped around them.` ITS want ones that start sentences and ones that finish ITS. ITS, and ones which are wrapped in punctuation.` Test link: <a href="index.cfm>ITS</a> AND I WANT THIS CHANGE TO : I want you to tag this <acronym title="ITS">ITS</acronym>, but not this wrapped one - <acronym title="ITS">ITS</acronym> This is another test as I still want to update <acronym title="ITS">ITS</acronym> that have other HTML tags wrapped around them.` <acronym title="ITS">ITS</acronym> want ones that start sentences and ones that finish <acronym title="ITS">ITS</acronym>. <acronym title="ITS">ITS</acronym>, and ones which are wrapped in punctuation. Test link: <acronym title="ITS"><a href="index.cfm>ITS</a></acronym> Are there any Reg Ex experts out there that could help me finish this off? Any other hints tips would also be appreciated. Thanks a lot, James P.S. This is going to be placed in a ColdFusion application if that helps anyone in specific syntax.

    Read the article

  • Custom event dispatch location

    - by Martino Wullems
    Hello, I've been looking into custom event (listeners) for quite some time, but never succeeded in making one. There are so many different mehods, extending the Event class, but also Extending the EventDispatcher class, very confusing! I want to settle with this once and for all and learn the appriopate technique. package{ import flash.events.Event; public class CustomEvent extends Event{ public static const TEST:String = 'test'; //what exac is the purpose of the value in the string? public var data:Object; public function CustomEvent(type:String, bubbles:Boolean = false, cancelable:Boolean = false, data:Object = null):void { this.data = data; super(); } } } As far as I know a custom class where you set the requirements for the event to be dispatched has to be made: package { import flash.display.MovieClip; public class TestClass extends MovieClip { public function TestClass():void { if (ConditionForHoldToComplete == true) { dispatchEvent(new Event(CustomEvent.TEST)); } } } } I'm not sure if this is correct, but it should be something along the lines of this. Now What I want is something like a mouseevent, which can be applied to a target and does not require a specific class. It would have to work something like this: package com.op_pad._events{ import flash.events.MouseEvent; import flash.utils.Timer; import flash.events.TimerEvent; import flash.events.EventDispatcher; import flash.events.Event; public class HoldEvent extends Event { public static const HOLD_COMPLETE:String = "hold completed"; var timer:Timer; public function SpriteEvent(type:String, bubbles:Boolean=true, cancelable:Boolean=false) { super( type, bubbles, cancelable ); timer = new Timer(1000, 1); //somehow find the target where is event is placed upon -> target.addEventlistener target.addEventListener(MouseEvent.MOUSE_DOWN, startTimer); target.addEventListener(MouseEvent.MOUSE_UP, stopTimer); } public override function clone():Event { return new SpriteEvent(type, bubbles, cancelable); } public override function toString():String { return formatToString("MovieEvent", "type", "bubbles", "cancelable", "eventPhase"); } ////////////////////////////////// ///// c o n d i t i o n s ///// ////////////////////////////////// private function startTimer(e:MouseEvent):void { timer.start(); timer.addEventListener(TimerEvent.TIMER_COMPLETE, complete); } private function stopTimer(e:MouseEvent):void { timer.stop() } public function complete(e:TimerEvent):void { dispatchEvent(new HoldEvent(HoldEvent.HOLD_COMPLETE)); } } } This obviously won't work, but should give you an idea of what I want to achieve. This should be possible because mouseevent can be applied to about everything.The main problem is that I don't know where I should set the requirements for the event to be executed to be able to apply it to movieclips and sprites. Thanks in advance

    Read the article

  • Normalization of database for timesheet tool and ensure data integrity

    - by fireeyedboy
    I'm creating a timesheet application. I have the following entities (amongst others): Company Employee = an employee associated with a company Client = a client associated with a company So far I have the following (abbreviated) database setup: Company - id - name Employee - id - companyId (FK to Company.id) - name Client - id - companyId (FK to Company.id) - name Now, I want an employee to be associated with a client, but only if that client is associated with the company the employee works for. How would you guarantee this data integrity on a database level? Or should I just depend on the application to guarantee this data integrity? I thought about creating a many to many table like this: EmployeeClient - employeeId (FK to Employee.id) - companyId \ (combined FK to Client.companyId, Client.id) - clientId / Thus, when I insert a client for an employee along with the employee's company id, the database should prevent this when the client is not associated with the employee's company id. Does this make sense? Because this still doesn't guarantee the employee is associated with the company. How do you deal with these things? UPDATE The scenario is as followed: A company has multiple employees. Employees will only be linked to one company. A company has multiple clients also. Clients will only be linked to one company. (Company is a sandbox, so to speak). An employee of a company can be linked to a client of it's company, but only if the client is part of the company's clientele. In other words: The application will allow a company to create/add employees and create/add clients (hence the companyId FK in the Employee and Client tables). Next, the company will be allowed to assign certain clients to certain of it's employees (EmployeeClient table). Imagine an employee working on projects for a few clients for which s/he can write billable hours, but the employee must not be allowed to write billable hours for clients they are not assigned to by their employer (the company). So, employees will not automatically have access to all their company's clients, but only to those that the company has selected for them. Hopefully this has shed some more light on the matter.

    Read the article

  • SurfaceView for Camera Preview won't get destroyed when pressing the Power Button

    - by for3st
    I want to implement a camera preview. For that I have a custom View CameraView extends ViewGroup that in the constructor programatically creates an surfaceView. I have the following components (higly simplified for beverity): ScannerFragment.java public View onCreateView(..) { //inflate view and get cameraView } public void onResume() { //open camera -> set rotation -> startPreview (in a thread) -> //set preview callback -> start decoding worker } public void onPause() { // stop decoding worker -> stop Preview -> release camera } CameraView.java extends ViewGroup public void setUpCalledInConstructor(Context context) { //create a surfaceview and add it to this viewgroup -> //get SurfaceHolder and set callback } /* SurfaceHolder.Callback */ public void surfaceCreated(SurfaceHolder holder) { camera.setPreviewDisplay(holder); } public void surfaceDestroyed(SurfaceHolder holder) { //NOTHING is done here } public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { camera.getParameters().setPreviewSize(previewSize.width, previewSize.height); } fragment_scanner.xml <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent"> <com.myapp.camera.CameraView android:id="@+id/cameraPreview" android:layout_width="match_parent" android:layout_height="match_parent"/> </RelativeLayout> I think I have set the lifecycle correct (getting resources onResume(), releasing it onPause() roughly said) and the following works just fine: pressing home and returning pressing Taskswitcher and returning rotation But one thing doesn't work and that is when I press the power-button on the device and then return to the camera-preview. The result is: the preview is stuck with the image that was last captured before button was pressed. If I rotate it works fine again, since it will get through the lifecycle. After some research I found out that this is probably due to the fact that surfaceView won't get destroyed when the power-button is pressed, i.e. SurfaceHolder.Callback.surfaceDestroyed(SurfaceHolder holder) won't be called. And in fact when I compare the (very verbose) log output of the home-button-case and the power-button-case it's the same except that 'surfaceDestroyed' won't get called. So far I found no solution whatsoever to work around it. I purposely avoid any resource cleaning code in my surfaceDestroyed(), but this does not help. My idea was to manually destroy the surfaceView like asked in this question but this seems not possible. I also tested other applications with surfaceViews/cameras and they don't seem to have this issue. So I would appreciate any hints or tips on that.
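
    One workaround sketch (illustrative, not from the original post): since surfaceDestroyed() is never called for a power-button screen-off, listen for ACTION_SCREEN_OFF and tear the preview down explicitly, so that onResume() always starts from a clean state. startCameraPreview() and stopCameraPreview() stand in for the fragment's existing open/release code and are assumed helpers, not methods from the post.

        private final BroadcastReceiver screenOffReceiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                if (Intent.ACTION_SCREEN_OFF.equals(intent.getAction())) {
                    stopCameraPreview();   // same teardown as onPause(): stop preview, release camera
                }
            }
        };

        @Override
        public void onResume() {
            super.onResume();
            getActivity().registerReceiver(screenOffReceiver,
                    new IntentFilter(Intent.ACTION_SCREEN_OFF));
            startCameraPreview();          // reopen the camera and call setPreviewDisplay() again
        }

        @Override
        public void onPause() {
            getActivity().unregisterReceiver(screenOffReceiver);
            stopCameraPreview();
            super.onPause();
        }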

    Read the article

  • Avoiding stack overflows in wrapper DLLs

    - by peachykeen
    I have a program to which I'm adding fullscreen post-processing effects. I do not have the source for the program (it's proprietary, although a developer did send me a copy of the debug symbols, .map format). I have the code for the effects written and working, no problems. My issue now is linking the two. I've tried two methods so far: Use Detours to modify the original program's import table. This works great and is guaranteed to be stable, but the user's I've talked to aren't comfortable with it, it requires installation (beyond extracting an archive), and there's some question if patching the program with Detours is valid under the terms of the EULA. So, that option is out. The other option is the traditional DLL-replacement. I've wrapped OpenGL (opengl32.dll), and I need the program to load my DLL instead of the system copy (just drop it in the program folder with the right name, that's easy). I then need my DLL to load the Cg framework and runtime (which relies on OpenGL) and a few other things. When Cg loads, it calls some of my functions, which call Cg functions, and I tend to get stack overflows and infinite loops. I need to be able to either include the Cg DLLs in a subdirectory and still use their functions (not sure if it's possible to have my DLLs import table point to a DLL in a subdirectory) or I need to dynamically link them (which I'd rather not do, just to simplify the build process), something to force them to refer to the system's file (not my custom replacement). The entire chain is: Program loads DLL A (named opengl32.dll). DLL A loads Cg.dll and dynamically links (GetProcAddress) to sysdir/opengl32.dll. I now need Cg.dll to also refer to sysdir/opengl32.dll, not DLL A. How would this be done? Edit: How would this be done easily without using GetProcAddress? If nothing else works, I'm willing to fall back to that, but I'd rather not if at all possible. Edit2: I just stumbled across the function SetDllDirectory in the MSDN docs (on a totally unrelated search). At first glance, that looks like what I need. Is that right, or am I misjudging? (off to test it now) Edit3: I've solved this problem by doing thing a bit differently. Instead of dropping an OpenGL32.dll, I've renamed my DLL to DInput.dll. Not only does it have the advantage of having to export one function instead of well over 120 (for the program, Cg, and GLEW), I don't have to worry about functions running back in (I can link to OpenGL as usual). To get into the calls I need to intercept, I'm using Detours. All in all, it works much better. This question, though, is still an interesting problem (and hopefully will be useful for anyone else trying to do crazy things in the future). Both the answers are good, so I'm not sure yet which to pick...

    Read the article

  • Average velocity in AS3

    - by VideoDnd
    Hello, I need something accurate I can plug equations in to if you can help. How would you apply the equation bellow? Thanks guys. AVERAGE VELOCITY AND DISPLACEMENT average velocity V=X/T displacement x=v*T more info example I have 30 seconds and a field that is 170 yards. What average velocity would I need my horse to travel at to reach the end of the field in 30 seconds. I moved the decimal places around and got this. Here's what I tried 'the return value is close, but not close enough' FLA here var TIMER:int = 10; var T:int = 0; var V:int = 5.6; var X:int = 0; var Xf:int = 17000/10*2; var timer:Timer = new Timer(TIMER,Xf); timer.addEventListener(TimerEvent.TIMER, incrementCounter); timer.start(); function formatCount(i:int):String { var fraction:int = Math.abs(i % 100); var whole:int = Math.abs(i / 100); return ("0000000" + whole).substr(-7, 7) + "." + (fraction < 10 ? "0" + fraction : fraction); } function incrementCounter(event:TimerEvent) { T++; X = Math.abs(V*T); text.text = formatCount(X); } tests TARGET 5.6yards * 30seconds = 168yards INTEGERS 135.00 in 30 seconds MATH.ROUND 135.00 in 30 seconds NUMBERS 140.00 in 30 seconds control timer 'I tested with this and the clock on my desk' var timetest:Timer = new Timer(1000,30); var Dplus:int = 17000; timetest.addEventListener(TimerEvent.TIMER, cow); timetest.start(); function cow(evt:TimerEvent):void { tx.text = String("30 SECONDS: " + timetest.currentCount); if(timetest.currentCount> Dplus){ timetest.stop(); } } //far as I got...couldn't get delta to work... T = (V*timer.currentCount); X += Math.round(T);
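
    Written out, the relation the snippet is trying to reproduce, using the post's own numbers (170-yard field, 30 seconds), is shown below. Note also that var V:int = 5.6 truncates to 5 in ActionScript, since int holds whole numbers only, which by itself would pull the counter well short of the target.

        \[
            \bar{v} = \frac{\Delta x}{\Delta t} = \frac{170\ \text{yd}}{30\ \text{s}} \approx 5.67\ \text{yd/s},
            \qquad
            x(t) = \bar{v}\,t,\ \text{so with the post's rounded}\ \bar{v}=5.6:\; x(30) = 168\ \text{yd}.
        \]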

    Read the article

  • How can I structure my MustacheJS template to add dynamic classes based on the values from a JSON file?

    - by JGallardo
    OBJECTIVE To build an app that allows the user to search for locations. CURRENT STATE At the moment the locations listed are few, so they are just all presented when landing on the "dealers" page. BACKGROUND Previously there were only about 50 showrooms carrying a product we sell, so a static HTML page was fine. And displays as But the page size grew to about 1500 lines of code after doing this. We have gotten more and need a scalable solution so that we can add many more dealers fast. In other projects, I have previously used MustacheJS and to load in values from a JSON file. I know the ideally this will be an AJAX application. Perhaps I might be better off with database here? Below is what I have in mind so far, and it "works" up to a certain point, but seems not to be anywhere near the most sustainable solution that can be efficiently scaled. HTML <a id="{{state}}"></a> <div> <h4>{{dealer}} : {{city}}, {{state}} {{l_type}}</h4> <div class="{{icon_class}}"> <ul> <li><i class="icon-map-marker"></i></li> <li><i class="icon-phone"></i></li> <li><i class="icon-print"></i></li> </ul> </div> <div class="listingInfo"> <p>{{street}} <br>{{suite}}<br> {{city}}, {{state}} {{zip}}<br> Phone: {{phone}}<br> {{toll_free}}<br> {{fax}} </p> </div> </div> <hr> JSON { "dealers" : [ { "dealer":"Benco Dental", "City":"Denver", "state":"CO", "zip":"80112", "l_type":"Showroom", "icon_class":"listingIcons_3la", "phone":"(303) 790-1421", "toll_free":null, "fax":"(303) 790-1421" }, { "dealer":"Burkhardt Dental Supply", "City":"Portland", "state":"OR", "zip":"97220", "l_type":"Showroom", "icon_class":"listingIcons", "phone":" (503) 252-9777", "toll_free":"(800) 367-3030", "fax":"(866) 408-3488" } ]} CHALLENGES The CSS class wrapping the ul will vary based on how many fields there are. In this case there are 3, so the class is "listingIcons_3la" The "toll free" number section should only show up if in fact, there is a toll free number. the fax number should only show up if there is a value for a fax number.

    Read the article

  • YASR - Yet another search and replace question

    - by petronius31
    Environment: asp.net c# openxml Ok, so I've been reading a ton of snippets and trying to recreate the wheel, but I'm hoping that somone can help me get to my desination faster. I have multiple documents that I need to merge together... check... I'm able to do that with openxml sdk. Birds are singing, sun is shining so far. Now that I have the document the way I want it, I need to search and replace text and/or content controls. I've tried using my own text - {replace this} but when I look at the xml (rename docx to zip and view the file), the { is nowhere near the text. So I either need to know how to protect that within the doucment so they don't diverge or I need to find another way to search and replace. I'm able to search/replace if it is an xml file, but then I'm back to not being able to combine the doucments easily. Code below... and as I mentioned... document merge works fine... just need to replace stuff. protected void exeProcessTheDoc(object sender, EventArgs e) { string doc1 = Server.MapPath("~/Templates/doc1.docx"); string doc2 = Server.MapPath("~/Templates/doc2.docx"); string final_doc = Server.MapPath("~/Templates/extFinal.docx"); File.Delete(final_doc); File.Copy(doc1, final_doc); using (WordprocessingDocument myDoc = WordprocessingDocument.Open(final_doc, true)) { string altChunkId = "AltChunkId2"; MainDocumentPart mainPart = myDoc.MainDocumentPart; AlternativeFormatImportPart chunk = mainPart.AddAlternativeFormatImportPart( AlternativeFormatImportPartType.WordprocessingML, altChunkId); using (FileStream fileStream = File.Open(doc2, FileMode.Open)) chunk.FeedData(fileStream); AltChunk altChunk = new AltChunk(); altChunk.Id = altChunkId; mainPart.Document.Body.InsertAfter(altChunk, mainPart.Document.Body.Elements<Paragraph>().Last()); mainPart.Document.Save(); } exeSearchReplace(final_doc); } protected void exeSearchReplace(string document) { using (WordprocessingDocument wordDoc = WordprocessingDocument.Open(document, true)) { string docText = null; using (StreamReader sr = new StreamReader(wordDoc.MainDocumentPart. GetStream())) { docText = sr.ReadToEnd(); } Regex regexText = new Regex("acvtClientName"); docText = regexText.Replace(docText, "Hi Everyone!"); using (StreamWriter sw = new StreamWriter(wordDoc.MainDocumentPart.GetStream(FileMode.Create))) { sw.Write(docText); } } } } }

    Read the article

  • FluentNHibernate Many-To-One References where Foreign Key is not to Primary Key and column names are

    - by Todd Langdon
    I've been sitting here for an hour trying to figure this out... I've got 2 tables (abbreviated): CREATE TABLE TRUST ( TRUSTID NUMBER NOT NULL, ACCTNBR VARCHAR(25) NOT NULL ) CONSTRAINT TRUST_PK PRIMARY KEY (TRUSTID) CREATE TABLE ACCOUNTHISTORY ( ID NUMBER NOT NULL, ACCOUNTNUMBER VARCHAR(25) NOT NULL, TRANSAMT NUMBER(38,2) NOT NULL POSTINGDATE DATE NOT NULL ) CONSTRAINT ACCOUNTHISTORY_PK PRIMARY KEY (ID) I have 2 classes that essentially mirror these: public class Trust { public virtual int Id {get; set;} public virtual string AccountNumber { get; set; } } public class AccountHistory { public virtual int Id { get; set; } public virtual Trust Trust {get; set;} public virtual DateTime PostingDate { get; set; } public virtual decimal IncomeAmount { get; set; } } How do I do the many-to-one mapping in FluentNHibernate to get the AccountHistory to have a Trust? Specifically, since it is related on a different column than the Trust primary key of TRUSTID and the column it is referencing is also named differently (ACCTNBR vs. ACCOUNTNUMBER)???? Here's what I have so far - how do I do the References on the AccountHistoryMap to Trust??? public class TrustMap : ClassMap<Trust> { public TrustMap() { Table("TRUST"); Id(x => x.Id).Column("TRUSTID"); Map(x => x.AccountNumber).Column("ACCTNBR"); } } public class AccountHistoryMap : ClassMap<AccountHistory> { public AccountHistoryMap() { Table("TRUSTACCTGHISTORY"); Id (x=>x.Id).Column("ID"); References<Trust>(x => x.Trust).Column("ACCOUNTNUMBER").ForeignKey("ACCTNBR").Fetch.Join(); Map(x => x.PostingDate).Column("POSTINGDATE"); ); I've tried a few different variations of the above line but can't get anything to work - it pulls back AccountHistory data and a proxy for the Trust; however it says no Trust row with given identifier. This has to be something simple. Anyone? Thanks in advance.

    Read the article

  • Using multiple aggregate functions in an algebraic expression in (ANSI) SQL statement

    - by morpheous
    I have the following aggregate functions (AGG FUNCs): foo(), foobar(), fredstats(), barneystats(). I want to know if I can use multiple AGG FUNCs in an algebraic expression. This may seem a strange/simplistic question for seasoned SQL developers - however, the but the reason I ask is that so far, all AGG FUNCs examples I have seen are of the simplistic variety e.g. max(salary) < 100, rather than using the AGG FUNCs in an expression which involves using multiple AGG FUNCs in an expression (like agg_func1() agg_func2()). The information below should help clarify further. Given tables with the following schemas: CREATE TABLE item (id int, length float, weight float); CREATE TABLE item_info (item_id, name varchar(32)); # Is it legal (ANSI) SQL to write queries of this format ? SELECT id, name, foo, foobar, fredstats FROM A, B (SELECT id, foo(123) as foo, foobar('red') as foobar, fredstats('weight') as fredstats FROM item GROUP BY id HAVING [ALGEBRAIC EXPRESSION] ORDER BY id AS A), item_info AS B WHERE item.id = B.id Where: ALGEBRAIC EXPRESSION is the type of expression that can be used in a WHERE clause - for example: ((foo(x) < foobar(y)) AND foobar(y) IN (1,2,3)) OR (fredstats(x) <> 0)) I am using PostgreSQL as the db, but I would prefer to use ANSI SQL wherever possible. Assuming it is legal to include AGG FUNCS in the way I have done above, I'd like to know: Is there a more efficient way to write the above query ? Is there any way I can speed up the query in terms of a judicious choice of indexes on the tables item and item_info ? Is there a performance hit of using AGG FUNCs in an algebraic expression like I am (i.e. an expression involving the output of aggregate functions rather than constants? Can the expression also include 'scaled' AGG FUNC? (for example: 2*foo(123) < -3*foobar(456) ) - will scaling (i.e. multiplying an AGG FUNC by a number have an effect on performance?) How can I write the query above using INNER JOINS instead?

    Read the article

  • Java map / nio / NFS issue causing a VM fault: "a fault occurred in a recent unsafe memory access op

    - by Matthew Bloch
    I have written a parser class for a particular binary format (nfdump if anyone is interested) which uses java.nio's MappedByteBuffer to read through files of a few GB each. The binary format is just a series of headers and mostly fixed-size binary records, which are fed out to the caller by calling nextRecord(), which pushes on the state machine, returning null when it's done. It performs well. It works on a development machine.

    On my production host, it can run for a few minutes or hours, but always seems to throw "java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code", fingering one of the Map.getInt, getShort methods, i.e. a read operation in the map. The uncontroversial (?) code that sets up the map is this:

        /** Set up the map from the given filename and position */
        protected void open() throws IOException {
            // Set up buffer, is this all the flexibility we'll need?
            channel = new FileInputStream(file).getChannel();
            MappedByteBuffer map1 = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            map1.load(); // we want the whole thing, plus seems to reduce frequency of crashes?
            map = map1;
            // assumes the host writing the files is little-endian (x86), ought to be configurable
            map.order(java.nio.ByteOrder.LITTLE_ENDIAN);
            map.position(position);
        }

    and then I use the various map.get* methods to read shorts, ints, longs and other sequences of bytes, before hitting the end of the file and closing the map.

    I've never seen the exception thrown on my development host. But the significant point of difference between my production host and development is that on the former, I am reading sequences of these files over NFS (probably 6-8TB eventually, still growing). On my dev machine, I have a smaller selection of these files locally (60GB), but when it blows up on the production host it's usually well before it gets to 60GB of data. Both machines are running java 1.6.0_20-b02, though the production host is running Debian/lenny, the dev host is Ubuntu/karmic. I'm not convinced that will make any difference. Both machines have 16GB RAM, and are running with the same java heap settings.

    I take the view that if there is a bug in my code, there is enough of a bug in the JVM not to throw me a proper exception! But I think it is just a particular JVM implementation bug due to interactions between NFS and mmap, possibly a recurrence of 6244515 which is officially fixed. I already tried adding in a "load" call to force the MappedByteBuffer to load its contents into RAM - this seemed to delay the error in the one test run I've done, but not prevent it. Or it could be coincidence that was the longest it had gone before crashing!

    If you've read this far and have done this kind of thing with java.nio before, what would your instinct be? Right now mine is to rewrite it without nio :)
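
    A sketch of the "rewrite it without nio" fallback mentioned at the end (illustrative; the buffer field and fill() helper are inventions, not code from the post): read the file through a plain FileChannel into a heap ByteBuffer instead of mapping it, so a failed NFS read surfaces as an ordinary IOException rather than a fault inside compiled code. Records that straddle a window boundary would still need handling, so this is only the skeleton of the idea.

        protected void open() throws IOException {
            channel = new FileInputStream(file).getChannel();
            buffer = ByteBuffer.allocate(64 * 1024 * 1024);     // window size is arbitrary
            buffer.order(java.nio.ByteOrder.LITTLE_ENDIAN);
            channel.position(position);
            fill();
        }

        /** Refill the window from the channel's current position. */
        private void fill() throws IOException {
            buffer.clear();
            while (buffer.hasRemaining() && channel.read(buffer) != -1) {
                // keep reading until the window is full or we hit EOF
            }
            buffer.flip();                                      // ready for the get* calls
        }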

    Read the article

  • WiX custom action with DTF... quite confused...

    - by Joshua
    Okay, I have decided the only way I can do what I want to do with WiX (thanks to an old installer I didn't write that I now have to upgrade) is with some CUSTOM ACTIONS. Basically, I need to back up a file before the RemoveExistingProducts and restore that file again after RemoveExistingProducts. I think this is what's called a "type 2 custom action." The sequencing I think I understand, however, what I don't understand is first of all how I pass data to my C# action (the directory the file is in from the WiX) and how to reference my C# (DTF?) action with the Binary and CustomAction tags. Also, does all this need to be in a tag? All the examples show it that way. Here is what I have so far in the .WXS file... <Binary Id="backupSettingsAction.dll" SourceFile="backupSettingsAction.CA.dll"/> <CustomAction Id="BackupSettingsAction" BinaryKey="backupSettingsAction.dll" DllEntry="CustomAction" Execute="immediate" /> <InstallExecuteSequence> <Custom Action="backupSettingsAction.dll" Before="InstallInitialize"/> <RemoveExistingProducts After="InstallFinalize" /> <Custom Action="restoreSettingsAction.dll" After="RemoveExistingFiles"/> </InstallExecuteSequence> The file I need to back up is a settings file from the previous install (which needs to remain intact), it is located in the directory: <Directory Id="CommonAppDataFolder" Name="CommonAppData"> <Directory Id="CommonAppDataPathways" Name="Pathways" /> </Directory> And even has a Component tag for it, though I need to back the file up that exists already: <Component Id="Settings" Guid="A3513208-4F12-4496-B609-197812B4A953" NeverOverwrite="yes" > <File Id="settingsXml" ShortName="SETTINGS.XML" Name="Settings.xml" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\Settings\settings.xml" Vital="yes" /> </Component> And this is referencing the C# file that Visual Studio (2005) created for me: namespace backupSettingsAction { public class CustomActions { [CustomAction] public static ActionResult CustomAction1(Session session) { session.Log("backing up settings file"); //do I hardcode the directory and name of the file in here, or can I pass them in? return ActionResult.Success; } } } Any help is greatly apprecaited. Thank you!

    Read the article

  • Accessing different connection strings at runtime in ASP.NET MVC 1

    - by Neil T.
    I'm trying to implement integration testing in my ASP.NET MVC 1.0 solution. The technologies in use are LINQ-to-SQL, NUnit and WatiN. I recently discovered a pattern that will allow me to create a testing version of the database on the fly without modifying the development version of the database. I needed this behavior in order to run my user interface tests in WatiN that may modify the database. The plan is to modify the connection string in the Web.config file, and pass that new connection string to the DataContext constructor. This way, I don't have to add routes or modify my URLs in order to perform the integration testing. I've set up the project so that the test setup can modify the connection string to point to the test database when the tests are running. The connection string is stored in web.config. The problem I'm having is that when I try to run the tests, I get a NullReferenceException when trying to access the HTTPContext. From everything that I have read so far, the HTTPContext is only available within the context of a controller. Here is the code for the property that is supposed to give me the reference to the Web.config file: private System.Configuration.Configuration WebConfig { get { ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap(); // NullReferenceException occurs on this line. fileMap.ExeConfigFilename = HttpContext.Current.Server.MapPath("~\\web.config"); System.Configuration.Configuration config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None); return config; } } Is there something that I am missing in order to make this work? Is there a better way to accomplish what I'm trying to achieve? UPDATE: I decided to abandon the modification of Web.config in lieu of a "request-scoped DataContext" pattern that I found here. From the looks of it, I believe it should give me the results I'm looking for. However, during the TextFixtureSetUp, I try to create a new copy of the database for testing purposes, and it fails silently. When I get to the tests, the repository still uses the production database connection string to load data.

    Read the article

  • Getting a KeyError in DB backend of django-digest

    - by rtmie
    I have just started to integrate django_digest into my app. As a start I have added the @httpdigest decorator to one of my views. If I try to connect to it I get a KeyError exception thrown in django_digest/backend/db.py . Depending on which db I configure I get a different KeyError in a different location. I am using Django 1.2.1, with MySql (also tested with sqlite). I am using the default values for all the settings options. As far as I can see I have followed all instructions but am struggling all day with this. I am using the repository versions of django-digest and python-digest. Any steer would be greatly appreciated. Tracebacks for sqlite and mysql below: with sqlite: Traceback (most recent call last): File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/servers/basehttp.py", line 674, in __call__ return self.application(environ, start_response) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/wsgi.py", line 248, in __call__ signals.request_finished.send(sender=self.__class__) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/dispatch/dispatcher.py", line 162, in send response = receiver(signal=self, sender=sender, **named) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/backend/db.py", line 16, in close_connection _connection.close() File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/db/backends/sqlite3/base.py", line 186, in close if self.settings_dict['NAME'] != ":memory:": KeyError: 'NAME' with mysql: Traceback (most recent call last): File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/servers/basehttp.py", line 674, in __call__ return self.application(environ, start_response) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/wsgi.py", line 241, in __call__ response = self.get_response(request) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/base.py", line 142, in get_response return self.handle_uncaught_exception(request, resolver, exc_info) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/base.py", line 166, in handle_uncaught_exception return debug.technical_500_response(request, *exc_info) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/base.py", line 80, in get_response response = middleware_method(request) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/middleware.py", line 13, in process_request if (not self._authenticator.authenticate(request) and File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/__init__.py", line 86, in authenticate partial_digest = self._account_storage.get_partial_digest(digest_response.username) File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/backend/db.py", line 97, in get_partial_digest cursor = get_connection().cursor() File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/db/backends/__init__.py", line 75, in cursor cursor = self._cursor() File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/db/backends/mysql/base.py", line 281, in _cursor if settings_dict['USER']: KeyError: 'USER'

    Read the article

  • Auto populate input based on file name with AngularJS

    - by LouieV
    I am playing around with AngularJS and have not been able to solve this problem. I have a view that has a form to upload a file to a node server. So far I have manage to do this using some directives and a service. I allow the user to send a custom name to the POST data if they desire. What I wan to accomplish is that when the user selects a file the filename models auto populates. My view looks like: <div> <input file-model="phpFile" type="file"> <input name="filename" type="text" ng-model="filename"> <button ng-click="send()">send</button> </div> file-model is my directive that allows the file to be assigned to a scope. myApp.directive('fileModel', ['$parse', function($parse) { return { restrict: 'A', link: function(scope, element, attrs) { var model = $parse.(attrs.fileModel); var modelSetter = model.assign; element.bind('change', function() { scope.$apply(function() { modelSetter(scope, element[0].files[0]); }); }); } }]); The service: myApp.service('fileUpload', ['$http', function($http){ this.uploadFileToUrl = function(file, uploadUrl, optionals) { var fd = new FormData(); fd.append('file', file); for (var key in file) { fd.append(key, file[key]); } for(var i = 0; i < optionals.length; i++){ fd.append(optionals[i].name, optionals[i].data); } }); }]); Here as you can see I pass the file, append its properties, and append any optional properties. In the controller is where I am having the troubles. I have tried $watch and using the file-model but I get the same error either way. myApp.controller('AddCtrl', function($scope, $location, PEberry, fileUpload){ //$scope.$watch(function() { // return $scope.phpFile; //},function(newValue, oldValue) { // $scope.filename = $scope.phpFile.name; //}, true); // if ($scope.phpFiles) { // $scope.filename = $scope.phpFiles.name; // } $scope.send = function() { var uploadUrl = "/files"; var file = $scope.phpFile; //var opts = [{ name: "uname", data: file.name }] fileUpload.uploadFileToUrl(file, uploadUrl); }; }); Thank you for your help!

    Read the article
