Search Results

Search found 8715 results on 349 pages for 'bad sectors'.


  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like these aren't available by default on most phones:

    - Full-text search: searches all address book contents and messages, with anything else being a plus.
    - Better call management: e.g. a rotating audio call log, meaning you always have the last N calls recorded for your listening pleasure later (your little girl just said her first "da-da" while you were on a business trip, you had a telephone job interview, you received complex instructions to do something, etc.).
    - Bluetooth remote control (like anyRemote, for example, but available by default on a Bluetooth phone).
    - Multitasking capabilities worth mentioning (in general there are none).
    - Software updates, e.g. weekly, that would make the phone much more usable, even if they had to be done over USB rather than over the network.

    I'm sure I was dumbfounded by the lack or design of other features as well, but they don't come to mind right now. To clarify, I'm not talking about smartphones here: my plain, 2-year-old phone has a CPU an order of magnitude faster than my first PC and about as much storage space, and it's ridiculous how bad (slow, unwieldy) the software is; and it's not one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum on a capable hardware platform from being filled up?

    Edit: I believe a clarification on the multitasking point might be beneficial. I'll use my phone as an example, although the point is much more general. The phone can multitask and in fact does: you can listen to music and do something else at the same time. On the other hand, the way the software has been designed makes multitasking next to useless. (Ditto with the external touch screen: it can take touch commands, but only one application makes use of it, and only with 3 commands.) To take the multitasking example to the extreme: if I plug my phone into my laptop and it registers as an external disk, it doesn't allow any kind of operation. Messages, calling, calendar: everything is out of reach, although I can still receive a call. No "battery life" issue there: it's charging while connected.

    BTW, another example of design below the current state of the art: I don't see a phone on the horizon which will remember where in an audio or video file you were when you stopped listening/watching it last time (podcasts are a good use case). Simplistic rewind/fast-forward functionality only aggravates the problem.

  • Animating gradient displays line artifacts in ActionScript

    - by TheDarkIn1978
    I've programmatically created a simple gradient (blue to red) sprite rect using my own basic class called GradientRect, but moving or rotating the sprite exhibits line artifacts. When the sprite is rotating, it kind of resembles bad reception on an old television set. I'm almost certain the cause is that each line slice of the gradient is a vector stroke, so there are gaps between the lines; this is visible when the sprite is zoomed in.

        var colorPickerRect:GradientRect = new GradientRect(200, 200, 0x0000FF, 0xFF0000);
        addChild(colorPickerRect);
        colorPickerRect.cacheAsBitmap = true;
        colorPickerRect.x = colorPickerRect.y = 100;
        colorPickerRect.addEventListener(Event.ENTER_FRAME, rotate);

        function rotate(evt:Event):void {
            evt.target.rotation += 1;
        }

    The class:

        package {
            import flash.display.CapsStyle;
            import flash.display.GradientType;
            import flash.display.LineScaleMode;
            import flash.display.Sprite;
            import flash.geom.Matrix;

            public class GradientRect extends Sprite {

                public function GradientRect(gradientRectWidth:Number, gradientRectHeight:Number, ...leftToRightColors) {
                    init(gradientRectWidth, gradientRectHeight, leftToRightColors);
                }

                private function init(gradientRectWidth:Number, gradientRectHeight:Number, leftToRightColors:Array):void {
                    var leftToRightAlphas:Array = new Array();
                    var leftToRightRatios:Array = new Array();
                    var leftToRightPartition:Number = 255 / (leftToRightColors.length - 1);
                    var i:int;

                    // build the alpha and ratio arrays, one entry per color
                    for (i = 0; i < leftToRightColors.length; i++) {
                        leftToRightAlphas.push(1);
                        leftToRightRatios.push(i * leftToRightPartition);
                    }

                    // graphics matrix and lineStyle
                    var leftToRightColorsMatrix:Matrix = new Matrix();
                    leftToRightColorsMatrix.createGradientBox(gradientRectWidth, 1);
                    graphics.lineStyle(1, 0, 1, false, LineScaleMode.NONE, CapsStyle.NONE);

                    // draw the gradient as one vertical stroke per pixel column
                    for (i = 0; i < gradientRectWidth; i++) {
                        graphics.lineGradientStyle(GradientType.LINEAR, leftToRightColors, leftToRightAlphas, leftToRightRatios, leftToRightColorsMatrix);
                        graphics.moveTo(i, 0);
                        graphics.lineTo(i, gradientRectHeight);
                    }
                }
            }
        }

    How can I solve this problem?
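    For context, a hedged sketch of an alternative drawing strategy (not necessarily the accepted fix): drawing the gradient as a single filled rectangle with beginGradientFill() removes the per-column strokes, and with them the sub-pixel gaps between lines, entirely. Using the same matrix, alphas and ratios as the loop above:

        // Sketch: one gradient fill instead of one stroke per pixel column.
        // Drop-in replacement for the drawing loop in init().
        graphics.beginGradientFill(GradientType.LINEAR, leftToRightColors,
            leftToRightAlphas, leftToRightRatios, leftToRightColorsMatrix);
        graphics.drawRect(0, 0, gradientRectWidth, gradientRectHeight);
        graphics.endFill();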

  • What are the types and inner workings of a query optimizer?

    - by Frank Developer
    As I understand it, most query optimizers are cost-based. Some can be influenced by hints like FIRST_ROWS(). Others are tailored for OLAP. Is it possible to learn more detailed logic about how Informix IDS and SE's optimizers decide the best route for processing a query, other than through SET EXPLAIN? Is there any documentation which illustrates the ranking of SELECT statements? I would imagine that "SELECT col FROM table WHERE ROWID = n" ranks first; what are the rest of them?

    If I'm not mistaken, Informix's ROWID is a SERIAL (INT), which allows a maximum of about 2 billion rows, or maybe it uses INT8 for terabyte-scale row counts. Oracle, I think, uses hex values for its ROWID. Too bad ROWID can't be used more often, since a row's ROWID can change. So maybe ROWID is used by the optimizer as a counter? Perhaps it could be used to implement the query-progress idea I mentioned in my "Begin viewing query results before query completes" question. For some reason, I feel it wouldn't be that difficult to report a query's progress while it is being processed, perhaps at the expense of some slight overhead, but it would be nice to know ahead of time: a "Google-like" estimate of how many rows meet a query's criteria, progress displayed every 100, 200, 500 or 1,000 rows, the ability for users to cancel at any time, and qualifying rows displayed as they are added to the current list while the search continues. This is just one example; perhaps we could think of other neat/useful features, since the ingredients are more or less there.

    Perhaps we could fine-tune each query with more granularity than is currently available? OLTP queries tend to be mostly static and pre-defined. The "what-ifs" are more OLAP, so let's try to add more control and intelligence there. Being able to precisely control a query, rather than merely "hint-influence" it, is what's needed, and for that it would be necessary to know how the optimizer's logic is programmed. We could then have dynamic SELECT and other statements for specific situations, and maybe even tell IDS to read blocks of index nodes at a time instead of one by one, etc.
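    For reference, the per-query window into the IDS optimizer's decision that the question mentions is SET EXPLAIN; a minimal sketch (on most platforms the chosen plan is written to a file named sqexplain.out in the current directory):

        SET EXPLAIN ON;
        SELECT col FROM tab WHERE rowid = 42;
        SET EXPLAIN OFF;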

  • java.util.Map with HtmlDataTable

    - by gerry
    Hi, I'm developing an application on GlassFish v3 which uses Sun's RI of Java EE 6 and JSF 2.0, etc., and the bad thing is that no switch away from Sun's RI can be made (to use MyFaces or something like that). The problem is that I want to build an HtmlDataTable by hand (in Java code). The datatable should represent a java.util.Map, where the first column displays the keys and the second the values of the map. I've successfully built a PanelGrid from a java.util.List, using the value-expression methods of UIComponent each time to bind the UI to the underlying List. But this doesn't work with the Map. Here is a snippet of my code:

        public HtmlDataTable getEntityDetailsDataTable() {
            ...
            Application app = FacesContext.getCurrentInstance().getApplication();
            HtmlDataTable component = (HtmlDataTable) app.createComponent(HtmlDataTable.COMPONENT_TYPE);
            component.setValueExpression("value", ExpressionUtil.createValueExpression(
                    "#{entityTree.entity." + fieldName + ".entrySet()}", Map.class));
            component.setVar("param");

            UIColumn column = new UIColumn();
            UIOutput label1 = DynamicHtmlComponentCreator.createHtmlOutputText("#{param[key]}", String.class);
            column.getChildren().add(label1);
            UIOutput label2 = DynamicHtmlComponentCreator.createHtmlOutputText("#{param[value]}", String.class);
            column.getChildren().add(label2);
            component.getChildren().add(column);
            ...
            return component;
        }

    Further, this code only prints out the content of the Map. On another page I need the values displayed in HtmlInputText elements, and the whole map updated when the user clicks e.g. a "Save" button. If there is a workaround to represent the Map as two Lists, please help me, because with that approach (map as 2 lists) I have no idea how the underlying map/database model could be updated again. Hopefully someone can help me.
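    A hedged sketch of one commonly suggested workaround, with two assumptions spelled out: UIData iterates a List or array rather than a bare Set, so the entry set is copied into a List exposed by the backing bean (getEntityMap and getEntityEntries are hypothetical names), and the iteration variable is renamed because "param" is already an implicit EL object (the request-parameter map) and would be shadowed or mis-resolved:

        // In the backing bean: expose the map's entries as a List,
        // which HtmlDataTable can iterate over.
        public List<Map.Entry<String, String>> getEntityEntries() {
            return new ArrayList<Map.Entry<String, String>>(getEntityMap().entrySet());
        }

        // When building the table, bind against that property
        // (ExpressionUtil is the helper from the question):
        component.setValueExpression("value",
                ExpressionUtil.createValueExpression("#{entityTree.entityEntries}", List.class));
        component.setVar("entry");
        // column texts then become "#{entry.key}" and "#{entry.value}"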

  • How to stream XML data using XOM?

    - by Jonik
    Say I want to output a huge set of search results, as XML, into a PrintWriter or an OutputStream, using XOM. The resulting XML would look like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <resultset>
          <result>
            [child elements and data]
          </result>
          ...
          [1000s of result elements more]
        </resultset>

    Because the resulting XML document could be big (tens or hundreds of megabytes, perhaps), I want to output it in a streaming fashion (instead of creating the whole Document in memory and then writing that). The granularity of outputting one <result> at a time is fine, so I want to generate one <result> after another and write it into the stream. Assume there's already a method that helps with iterating the results and generating Element objects:

        public nu.xom.Element getNextResult();

    So I'd simply like to do something like this pseudocode (automatic flushing enabled, so don't worry about that):

        open stream/writer
        write declaration
        write start tag for <resultset>
        while more results:
            write next <result> element
        write end tag for <resultset>
        close stream/writer

    I've been looking at Serializer, but the necessary methods (writeStartTag(Element), writeEndTag(Element), write(DocType)) are protected, not public! Is there no other way than to subclass Serializer to be able to use those methods, or to manually write the start and end tags directly into the stream as Strings, bypassing XOM altogether? (The latter wouldn't be too bad in this simple example, but in the general case it would get quite ugly.) Am I missing something, or is XOM just not made for this?

    With dom4j I could do this easily using XMLWriter: it has constructors that take a Writer or OutputStream, and methods writeOpen(Element), writeClose(Element), writeDocType(DocumentType), etc. Compare that to XOM's Serializer, where the only public write method is the one that takes a whole Document. Please refrain from answering if you're not familiar with XOM! I specifically want to know if and how you can do this kind of streaming with that library. (This is related to my question about the best dom4j replacement, where XOM is a strong contender.)
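    For what it's worth, the subclassing route the question mentions can be a very thin shim; a hedged sketch (the method names are XOM's protected ones, simply widened to public, plus a wrapper for the protected writeXMLDeclaration()):

        import java.io.IOException;
        import java.io.OutputStream;
        import java.io.UnsupportedEncodingException;
        import nu.xom.Element;
        import nu.xom.Serializer;

        // Widens Serializer's protected per-element methods so a caller
        // can stream one <result> at a time.
        public class StreamingSerializer extends Serializer {

            public StreamingSerializer(OutputStream out, String encoding)
                    throws UnsupportedEncodingException {
                super(out, encoding);
            }

            public void writeDeclaration() throws IOException {
                writeXMLDeclaration();
            }

            @Override
            public void writeStartTag(Element element) throws IOException {
                super.writeStartTag(element);
            }

            @Override
            public void writeEndTag(Element element) throws IOException {
                super.writeEndTag(element);
            }

            @Override
            public void write(Element element) throws IOException {
                super.write(element);
            }
        }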

  • No exception, no error, but still I don't receive the JSON object from my HTTP POST

    - by user2978538
    My source code:

        final Thread t = new Thread() {
            public void run() {
                Looper.prepare();
                HttpClient client = new DefaultHttpClient();
                HttpConnectionParams.setConnectionTimeout(client.getParams(), 10000);
                HttpResponse response;
                JSONObject obj = new JSONObject();
                try {
                    HttpPost post = new HttpPost("http://pc.dyndns-office.com/mobile.asp");
                    obj.put("Model", ReadIn1);
                    obj.put("Product", ReadIn2);
                    obj.put("Manufacturer", ReadIn3);
                    obj.put("RELEASE", ReadIn4);
                    obj.put("SERIAL", ReadIn5);
                    obj.put("ID", ReadIn6);
                    obj.put("ANDROID_ID", ReadIn7);
                    obj.put("Language", ReadIn8);
                    obj.put("BOARD", ReadIn9);
                    obj.put("BOOTLOADER", ReadIn10);
                    obj.put("BRAND", ReadIn11);
                    obj.put("CPU_API", ReadIn12);
                    obj.put("DISPLAY", ReadIn13);
                    obj.put("FINGERPRINT", ReadIn14);
                    obj.put("HARDWARE", ReadIn15);
                    obj.put("UUID", ReadIn16);

                    StringEntity se = new StringEntity(obj.toString());
                    se.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, "application/json"));
                    post.setEntity(se);
                    // note: the "host" header is set to the full URL here, not just the host name
                    post.setHeader("host", "http://pc.dyndns-office.com/mobile.asp");
                    response = client.execute(post);
                    if (response != null) {
                        InputStream in = response.getEntity().getContent();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                Looper.loop();
            }
        };
        t.start();

    I want to send a JSON object to a website. As far as I can see, I set the header, but I still get this exception; can someone help me? (I'm using Android Studio.)

    Edit: I don't get any exceptions anymore, but I still do not receive the JSON packet. When I call the website manually, I get a log file entry. Does anyone know what's wrong?

    Edit 2: When I debug, I get "HTTP/1.1 400 bad request" as the response. I'm sure it's not a permission problem. Any ideas?
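    As an aside grounded in the HTTP spec rather than in this particular server's behaviour: the Host request header is defined to carry only the host name (plus an optional port), and HttpClient normally derives it from the request URI by itself. A hedged sketch of a spec-conforming value, for comparison with the line flagged above:

        // Sketch: a conventional Host header (usually unnecessary to set by hand)
        post.setHeader("Host", "pc.dyndns-office.com");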

  • Creating a spam list with a web crawler in python

    - by user313623
    Hey guys, I'm not trying to do anything malicious here, I just need to do some homework. I'm a fairly new programmer, I'm using Python 3.0, and I'm having difficulty using recursion for problem-solving. I've been stuck on this question for quite a while. Here's the assignment:

    Write a recursive method spam(url, n) that takes a url of a web page as input and a non-negative integer n, collects all the email addresses contained in the web page and adds them to a global dictionary variable spam_dict, and then recursively calls itself on every http hyperlink contained in the web page. You will use a dictionary so only one copy of every email address is saved; your dictionary will store (key, value) pairs (email, email). The recursive call should use the parameter n-1 instead of n. If n = 0, you should collect the email addresses but no recursive calls should be made. The parameter n is used to limit the recursion to at most depth n. You will need to use the solutions of the two problems above; your method spam() will call the methods links2() and emails() and possibly other functions as well.

    Notes: 1. Running spam() directly will produce no output on the screen; to find your spam_dict, you will need to read the value of spam_dict, and you will also need to reset it to the empty dictionary before every run of spam. 2. Recall how global variables are used.

    Usage:

        spam_dict = {}
        spam('http://reed.cs.depaul.edu/lperkovic/csc242/test1.html', 0)
        spam_dict.keys()
        dict_keys([])
        spam_dict = {}
        spam('http://reed.cs.depaul.edu/lperkovic/csc242/test1.html', 1)
        spam_dict.keys()
        dict_keys(['[email protected]', '[email protected]'])

    So far, I've written a function that traverses web pages and puts all the links in a nice little list, and what I wanted to do was call that function. And why would I use recursion on a dictionary? And how? I don't understand how n ties into all of this.

        def links2(url):
            content = str(urlopen(url).read())
            myparser = MyHTMLParser()
            myparser.feed(content)
            lst = myparser.get()
            mergelst = []
            for link in lst:
                mergelst.append(urljoin(lst[0], link))
            print(mergelst)

    Any input (except why spam is bad) would be greatly appreciated. Also, I realize that the above function could probably look better; if you have a way to do it, I'm all ears. However, all I need is for the program to produce the proper output.
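    A hedged sketch of the recursive shape the assignment describes, with two assumptions: links2() returns its list instead of printing it, and emails(url) from the earlier exercise yields the addresses found on a page (it is not shown in the post):

        spam_dict = {}

        def spam(url, n):
            # collect addresses at every depth, keyed by themselves for uniqueness
            for email in emails(url):
                spam_dict[email] = email
            if n == 0:
                return                 # depth limit reached: no recursive calls
            for link in links2(url):
                spam(link, n - 1)      # recurse one level shallower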

  • Why does the rename() syscall prohibit moving a directory that I can't write to a different directory?

    - by Daniel Papasian
    I am trying to understand why this design decision was made with the rename() syscall in 4.2BSD. There's nothing I'm trying to solve here, just to understand the rationale for the behavior itself. 4.2BSD saw the introduction of the rename() syscall for the purpose of allowing atomic renames/moves of files. From 4.3BSD-Reno/src/sys/ufs/ufs_vnops.c:

        /*
         * If ".." must be changed (ie the directory gets a new
         * parent) then the source directory must not be in the
         * directory heirarchy above the target, as this would
         * orphan everything below the source directory. Also
         * the user must have write permission in the source so
         * as to be able to change "..". We must repeat the call
         * to namei, as the parent directory is unlocked by the
         * call to checkpath().
         */
        if (oldparent != dp->i_number)
                newparent = dp->i_number;
        if (doingdirectory && newparent) {
                VOP_LOCK(fndp->ni_vp);
                error = ufs_access(fndp->ni_vp, VWRITE, tndp->ni_cred);
                VOP_UNLOCK(fndp->ni_vp);

    So clearly this check was added intentionally. My question is: why? Is this behavior supposed to be intuitive? The effect of this is that one cannot atomically move a directory that one cannot write (located in a directory that one can write) to another directory that one can write. You can, however, create a new directory, move the links over (assuming one has read access to the directory), and then remove one's write bit on the directory. You just can't do so atomically.

        % cd /tmp
        % mkdir stackoverflow-question
        % cd stackoverflow-question
        % mkdir directory-1
        % mkdir directory-2
        % mkdir directory-1/directory-i-cant-write
        % echo "foo" > directory-1/directory-i-cant-write/contents
        % chmod 000 directory-1/directory-i-cant-write/contents
        % chmod 000 directory-1/directory-i-cant-write
        % mv directory-1/directory-i-cant-write directory-2
        mv: rename directory-1/directory-i-cant-write to directory-2/directory-i-cant-write: Permission denied

    We now have a directory I can't write, with contents I can't read, that I can't move atomically. I can, however, achieve the same effect non-atomically by changing permissions, making the new directory, using ln to create the new links, and changing permissions back (left as an exercise to the reader).

    . and .. are special-cased already, so I don't particularly buy that it is intuitive that if I can't write a directory I can't "change ..", which is what the source suggests. Is there any reason for this besides it being the perceived correct behavior by the author of the code? Is there anything bad that can happen if we let people atomically move directories (that they can't write) between directories that they can write?

  • Parallel.For maintain input list order on output list

    - by romeozor
    I'd like some input on keeping the order of a list during heavy-duty operations that I decided to try to do in a parallel manner, to see if it boosts performance. (It did!) I came up with a solution, but since this was my first attempt at anything parallel, I'd need someone to slap my hands if I did something very stupid.

    There's a query that returns a list of card owners, sorted by name, then by date of birth. This needs to be rendered in a table on a web page (ASP.NET WebForms). The original coder decided he would construct the table cell by cell (TableCell), add the cells to rows (TableRow), then add each row to the table. So no GridView; allegedly its performance is bad, but the performance was very poor regardless :). The database query returns in no time; most of the time is spent looping through the results and adding table cells, etc.

    I made the following method to maintain the original order of the list:

        private TableRow[] ComposeRows(List<CardHolder> queryResult)
        {
            int queryElementsCount = queryResult.Count();

            // array with the query's size
            var rowArray = new TableRow[queryElementsCount];

            Parallel.For(0, queryElementsCount, i =>
            {
                var row = new TableRow();
                var cell = new TableCell();

                // various operations, including simple ones such as:
                cell.Text = queryResult[i].Name;
                row.Cells.Add(cell);

                // here I'm adding the current item at its original index
                // to maintain order in the output list
                rowArray[i] = row;
            });

            return rowArray;
        }

    So as you can see, because I'm returning a very different type of data (List<CardHolder> -> TableRow[]), I can't simply omit the ordering from the original query and do it after the operations. I also thought it would be a good idea to Dispose() the objects at the end of each loop, because the query can return a huge list and letting cell and row objects pile up in the heap could impact performance.(?)

    How badly did I do? Does anyone have a better solution in case mine is flawed?
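    For comparison, a hedged PLINQ sketch of the same order-preserving idea; ComposeRow here is a hypothetical helper holding the per-row logic from the loop body above:

        // AsOrdered() makes PLINQ keep the source order in the output array.
        TableRow[] rowArray = queryResult
            .AsParallel()
            .AsOrdered()
            .Select(cardHolder => ComposeRow(cardHolder)) // hypothetical helper
            .ToArray();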

  • MacRuby + Interface Builder: How to display, then close, then display a window again

    - by Derick Bailey
    I'm a complete n00b with MacRuby and Cocoa, though I've got more than a year of Ruby experience, so keep that in mind when answering: I need lots of details and explanation. :)

    I've set up a simple project that has 2 windows in it, both of which are built with Interface Builder. The first window is a simple list of accounts using a table view. It has a "+" button below the table. When I click the + button, I want to show an "Add New Account" window. I also have an AccountsController < NSWindowController and an AddNewAccountController class, set up as the delegates for these windows, with the appropriate button-click methods wired up and outlets to reference the needed windows.

    When I click the "+" button in the Accounts window, I have this code fire:

        @add_account.center
        @add_account.display
        @add_account.makeKeyAndOrderFront(nil)
        @add_account.orderFrontRegardless

    This works great the first time I click the + button. Everything shows up, I'm able to enter my data and have it bind to my model. However, when I close the Add New Account form and then open it again, things start going bad.

    If I set the Add New Account window to release on close, then the second time I click the + button the window will still pop up, but it's frozen. I can't click any buttons, enter any data, or even close the form. I assume this is because the form's code has been released, so there is no message loop processing the form, but I'm not entirely sure about this.

    If I set the Add New Account window to not release on close, then the second time I click the + button the window shows up fine and is usable, but it still has all the data that I had previously entered: it's still bound to my previous Account class instance.

    What am I doing wrong? What's the correct way to create a new instance of the Add New Account form, create a new Account model, bind that model to the form, and show the form when I click the + button on the Accounts form? ... This is all being done on OS X 10.6.6, 64-bit, with Xcode 3.2.4.

  • Constructor versus setter injection

    - by Chris
    Hi, I'm currently designing an API where I wish to allow configuration via a variety of methods. One method is via an XML configuration schema, and another is through an API that I wish to play nicely with Spring. My XML-schema parsing code was previously hidden, and therefore the only concern was for it to work, but now I wish to build a public API and I'm quite concerned about best practice.

    It seems that many favor JavaBean-type POJOs with default zero-parameter constructors and then setter injection. The problem I am trying to tackle is that some setter implementations depend on other setters being called before them in sequence. I could write anal setters that will tolerate being called in many orders, but that will not solve the problem of a user forgetting to set the appropriate property and therefore leaving the bean in an incomplete state. The only solution I can think of is to forget about the objects being 'beans' and enforce the required parameters via constructor injection. An example of this is the defaulting of a component's id based on the id of its parent component.

    My interface:

        public interface IMyIdentityInterface {
            public String getId();

            /* A null value should create a unique meaningful default */
            public void setId(String id);

            public IMyIdentityInterface getParent();

            public void setParent(IMyIdentityInterface parent);
        }

    Base implementation of the interface:

        public abstract class MyIdentityBaseClass implements IMyIdentityInterface {

            private String _id;
            private IMyIdentityInterface _parent;

            public MyIdentityBaseClass() {}

            @Override
            public String getId() {
                return _id;
            }

            /**
             * If the id is null, then use the id of the parent component
             * appended with a lower-cased simple name of the current impl
             * class along with a counter suffix to enforce uniqueness
             */
            @Override
            public void setId(String id) {
                if (id == null) {
                    IMyIdentityInterface parent = getParent();
                    if (parent == null) {
                        // this may be the top level component or it may be that
                        // the user called setId() before setParent(..)
                    } else {
                        _id = Helpers.makeIdFromParent(parent, getClass());
                    }
                } else {
                    _id = id;
                }
            }

            @Override
            public IMyIdentityInterface getParent() {
                return _parent;
            }

            @Override
            public void setParent(IMyIdentityInterface parent) {
                _parent = parent;
            }
        }

    Every component in the framework will have a parent, except for the top-level component. Using setter injection, the setters behave differently based on the order in which they are called. In this case, would you agree that a constructor taking a reference to the parent is better, dropping the parent setter method from the interface entirely? Is it considered bad practice if I wish to be able to configure these components using an IoC container?

    Chris
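    A hedged sketch of the constructor-injection variant the question proposes (it assumes setParent() is removed from the interface, so the ordering problem disappears by construction):

        public abstract class MyIdentityBaseClass implements IMyIdentityInterface {

            private final IMyIdentityInterface _parent;
            private String _id;

            // a null parent is only legal for the top-level component
            protected MyIdentityBaseClass(IMyIdentityInterface parent) {
                _parent = parent;
            }

            @Override
            public IMyIdentityInterface getParent() {
                return _parent;
            }

            // setId() can now rely on the parent already being present
        }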

  • Write Scheme data structures so they can be eval-d back in, or alternative

    - by Jesse Millikan
    I'm writing an application (a juggling pattern animator) in PLT Scheme that accepts Scheme expressions as values for some fields. I'm attempting to write a small text editor that will let me "explode" expressions into expressions that can still be eval'd but contain the data as literals for manual tweaking. For example, (4hss->sexp "747") is a function call that generates a legitimate pattern. If I eval and print that, it becomes

        (((7 3) - - -) (- - (4 2) -) (- (7 2) - -) (- - - (7 1))
         ((4 0) - - -) (- - (7 0) -) (- (7 2) - -) (- - - (4 3))
         ((7 3) - - -) (- - (7 0) -) (- (4 1) - -) (- - - (7 1)))

    which can be "read" as a string, but will not "eval" the same as the function. For this statement, of course, what I need would be as simple as (quote (((7 3..., but other examples are non-trivial. This one, for example, contains structs, which print as vectors:

        pair-of-jugglers
        ; -->
        (#(struct:hand #(struct:position -0.35 2.0 1.0) #(struct:position -0.6 2.05 1.1) 1.832595714594046)
         #(struct:hand #(struct:position 0.35 2.0 1.0) #(struct:position 0.6 2.0500000000000003 1.1) 1.308996938995747)
         #(struct:hand #(struct:position 0.35 -2.0 1.0) #(struct:position 0.6 -2.05 1.1) -1.3089969389957472)
         #(struct:hand #(struct:position -0.35 -2.0 1.0) #(struct:position -0.6 -2.05 1.1) -1.8325957145940461))

    I've thought of at least three possible solutions, none of which I like very much.

    Solution A is to write a recursive eval-able output function myself for a reasonably large subset of the values that I might be using. There (probably...) won't be any circular references by the nature of the data structures used, so that wouldn't be such a long job. The output would end up looking like

        `(((3 0) (...                       ; ex 1
        `(,(make-hand (make-position ...    ; ex 2

    or even worse if I couldn't figure out how to do it properly with quasiquoting.

    Solution B would be to write out everything as

        (read (open-input-string "(big-long-s-expression)"))

    which, technically, solves the problem I'm bringing up but is... ugly.

    Solution C might be a different approach: giving up eval and using only read for parsing input, or an uglier approach where the s-expression is used directly as data if eval fails. But both of those seem unpleasant compared to using Scheme values directly.

    Undiscovered Solution D would be a PLT Scheme option, function or library I haven't located that would match Solution A.

    Help me out before I start having bad recursion dreams again.
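    For what it's worth, Solution B at least mechanizes cleanly; a hedged sketch of a helper that turns any value write can serialize into an expression that reads it back (note it inherits Solution B's limits, e.g. structs come back as plain vectors):

        ;; Sketch: serialize v with write, wrap it so eval rebuilds it via read.
        (define (value->input-expr v)
          (let ([out (open-output-string)])
            (write v out)
            `(read (open-input-string ,(get-output-string out)))))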

  • Do Websites need Local Databases Anymore?

    - by viatropos
    If there's a better place to ask this, please let me know. Every time I build a new website/blog/shopping-cart/etc., I keep trying to do the following:

    - Extract common functionality into reusable code (Rubygems and jQuery plugins mostly).
    - If possible, convert that gem into a small service so I never have to deal with a database for the objects involved (by service, I mean something lean and mean, usually built with the Sinatra web framework with a few core models).

    My assumption is that if I can remove dependencies on local databases, that will make things easier and more scalable in the long run (scalable in terms of reusability and manageability, not necessarily database/performance). I'm not sure if that's a good or bad assumption yet. What do you think?

    I've made this assumption because most serious database/model functionality has been built on the internet somewhere. Just to name a few:

    - Social network API: Facebook
    - Messaging API: Twitter
    - Mailing API: Google
    - Event API: Eventbrite
    - Shopping API: Shopify
    - Comment API: Disqus
    - Form API: Wufoo
    - Image API: Picasa
    - Video API: Youtube
    - ...

    Each of those things is fairly complicated to build from scratch and to make as optimized, simple, and easy to use as those companies have made them. So if I build an app that shows pictures (Picasa) on an event page (Eventbrite), and you can see who joined the event (Facebook Events), and send them emails (Google Apps API), and have them fill out monthly surveys (Wufoo), and watch a video when they're done (Youtube), all integrated into a custom, easy-to-use website, and I can do that without ever creating a local database, is that a good thing?

    I ask because there are two things missing from the puzzle that keep forcing me to create that local database:

    - Post API
    - RESTful/pretty URL API

    While there are plenty of blogging systems and APIs for them, there is no one place where you can just write content and have it be part of some massive thing. For every app, I have to use code for creating pretty/RESTful urls, and that saves posts. But it seems like that should be a service! The question is, is that what the website is: that place to integrate the world's services for my specific cause... and, sigh, to store posts that only my site has access to? Will everyone always need "their own blog"? Why not just have a profile and write lots of content on an established platform like StackOverflow or Facebook? That way I could write apps entirely without a database and know that I'm doing it right.

    Note: of course at some point you'd need a database, if you were doing something unique or new. But for the case where you're just rewiring information or creating things like videos, events, and products, is it really necessary anymore??

  • How to configure the framesize using AudioUnit.framework on iOS

    - by Piperoman
    I have an audio app in which I need to capture mic samples and encode them into MP3 with ffmpeg. First, the audio configuration:

        /**
         * We need to specify the format we want to work with.
         * We use Linear PCM because it's uncompressed and we work on raw data.
         *
         * We want 16 bits, 2 bytes (short) per packet/frame, at 8 kHz.
         */
        AudioStreamBasicDescription audioFormat;
        audioFormat.mSampleRate       = SAMPLE_RATE;
        audioFormat.mFormatID         = kAudioFormatLinearPCM;
        audioFormat.mFormatFlags      = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
        audioFormat.mFramesPerPacket  = 1;
        audioFormat.mChannelsPerFrame = 1;
        audioFormat.mBitsPerChannel   = audioFormat.mChannelsPerFrame * sizeof(SInt16) * 8;
        audioFormat.mBytesPerPacket   = audioFormat.mChannelsPerFrame * sizeof(SInt16);
        audioFormat.mBytesPerFrame    = audioFormat.mChannelsPerFrame * sizeof(SInt16);

    The recording callback is:

        static OSStatus recordingCallback(void *inRefCon,
                                          AudioUnitRenderActionFlags *ioActionFlags,
                                          const AudioTimeStamp *inTimeStamp,
                                          UInt32 inBusNumber,
                                          UInt32 inNumberFrames,
                                          AudioBufferList *ioData) {
            NSLog(@"Log record: %lu", inBusNumber);
            NSLog(@"Log record: %lu", inNumberFrames);
            // note: this casts the pointer itself, not the timestamp value
            NSLog(@"Log record: %lu", (UInt32)inTimeStamp);

            // the data gets rendered here
            AudioBuffer buffer;

            // a variable where we check the status
            OSStatus status;

            /** This is the reference to the object that owns the callback. */
            AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon;

            /** At this point we define the number of channels, which is mono
                for the iPhone. The number of frames is usually 512 or 1024. */
            buffer.mDataByteSize = inNumberFrames * sizeof(SInt16);  // sample size
            buffer.mNumberChannels = 1;                              // one channel
            buffer.mData = malloc(inNumberFrames * sizeof(SInt16));  // buffer size

            // we put our buffer into a bufferlist array for rendering
            AudioBufferList bufferList;
            bufferList.mNumberBuffers = 1;
            bufferList.mBuffers[0] = buffer;

            // render input and check for error
            status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags,
                                     inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
            [audioProcessor hasError:status:__FILE__:__LINE__];

            // process the bufferlist in the audio processor
            [audioProcessor processBuffer:&bufferList];

            // clean up the buffer
            free(bufferList.mBuffers[0].mData);
            //NSLog(@"RECORD");
            return noErr;
        }

    With data:

        inBusNumber = 1
        inNumberFrames = 1024
        inTimeStamp = 80444304 // the same inTimeStamp every time, which is strange

    However, the frame size I need to encode MP3 is 1152. How can I configure that? If I buffer, that implies a delay, but I would like to avoid this because it is a real-time app. If I use this configuration, each buffer leaves trailing garbage samples: 1152 - 1024 = 128 bad samples. All samples are SInt16.
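    The 1024-versus-1152 mismatch is usually treated as a reframing problem rather than an AudioUnit setting; a hedged plain-C sketch of a small FIFO that regroups callback buffers into encoder-sized frames (encode_mp3_frame is a hypothetical stand-in for the ffmpeg call; the added latency stays below one MP3 frame):

        #include <stddef.h>
        #include <stdint.h>
        #include <string.h>

        #define MP3_FRAME 1152

        void encode_mp3_frame(const int16_t *frame);  /* hypothetical encoder call */

        static int16_t fifo[4 * MP3_FRAME];  /* room for a few callbacks */
        static size_t  fill = 0;

        /* Feed each 1024-sample callback buffer in; emit 1152-sample frames out. */
        void on_mic_samples(const int16_t *in, size_t n) {
            memcpy(fifo + fill, in, n * sizeof(int16_t));
            fill += n;
            while (fill >= MP3_FRAME) {
                encode_mp3_frame(fifo);
                memmove(fifo, fifo + MP3_FRAME, (fill - MP3_FRAME) * sizeof(int16_t));
                fill -= MP3_FRAME;
            }
        }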

  • Send/cache multidimensional array to php with jQuery

    - by robertdd
    Hi, I have a little (big) problem here. :) After I upload some images, I get a list of all the images, and I have some jQuery functions to rotate, duplicate, delete and shuffle them. When I select an image and hit delete, I send a POST to PHP with the alt="" value of the image, then identify the picture and edit it.

    I want to make a Save button: instead of sending a POST every time I rotate an image, wouldn't it be better to send one POST after editing the list of images, with an array that contains all the data? My PHP array after upload looks like this:

        [files] => Array
        (
            [lcxkijgr] => lcxkijgr.jpg
            [xcewxpfv] => xcewxpfv.jpg
            [rtiurwxf] => rtiurwxf.jpg
            [gsbxdsdc] => gsbxdsdc.jpg
        )

    Say I uploaded 4 pictures: the first picture I rotate 90 degrees, the second I want to duplicate, the third I rotate 270 degrees, and the fourth image I delete. I can do all this with jQuery alone, but on the server the images stay the same, so after a refresh the images are unchanged. This is the list with the images:

        <div class="upimage">
            <ul id="upimagesQueue">
                <li id="upimagesHPVEJM">
                    <a href="javascript:jQuery('#upimagesHPVEJM').showlargeimage('HPVEJM')">
                        <img alt="lcxkijgr" src="uploads/s6id9r9icnp8q9102h8md9kfd7/lcxkijgr.jpg?1272087830477" id="HPVEJM" style="display: block;">
                    </a>
                </li>
                <li id="upimagesSTCSAV">
                    <a href="javascript:jQuery('#upimagesSTCSAV').showlargeimage('STCSAV')">
                        <img alt="xcewxpfv" src="uploads/s6id9r9icnp8q9102h8md9kfd7/xcewxpfv.jpg?1272087831360" id="STCSAV" style="display: block;">
                    </a>
                </li>
                <li id="upimagesBFPUEQ">
                    <a href="javascript:jQuery('#upimagesBFPUEQ').showlargeimage('BFPUEQ')">
                        <img alt="rtiurwxf" src="uploads/s6id9r9icnp8q9102h8md9kfd7/rtiurwxf.jpg?1272087832162" id="BFPUEQ" style="display: block;">
                    </a>
                </li>
                <li id="upimagesRKXNSV">
                    <a href="javascript:jQuery('#upimagesRKXNSV').showlargeimage('RKXNSV')">
                        <img alt="gsbxdsdc" src="uploads/s6id9r9icnp8q9102h8md9kfd7/gsbxdsdc.jpg?1272087832957" id="RKXNSV" style="display: block;">
                    </a>
                </li>
            </ul>
        </div>

    Is it OK if I make one array like this?

        array{
            imgFromLi = array(img1, img2, img3, img4, img5, img6)
            rotate    = array{img1=90, img2=270, img3=90}
            delete    = array{img4, img5, img6}
            duplicate = array{img2, img3}
        }

    How can I send/cache this array? Sorry for my very bad English.
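    A hedged sketch of one way to send that whole structure in a single Save request (save.php is a hypothetical endpoint, and JSON.stringify may need the json2.js shim in older browsers):

        // Collect the pending edits client-side, then post them all at once.
        var edits = {
            imgFromLi: ['lcxkijgr.jpg', 'xcewxpfv.jpg', 'rtiurwxf.jpg', 'gsbxdsdc.jpg'],
            rotate:    { 'lcxkijgr.jpg': 90, 'rtiurwxf.jpg': 270 },
            del:       ['gsbxdsdc.jpg'],
            duplicate: ['xcewxpfv.jpg']
        };

        jQuery.post('save.php', { edits: JSON.stringify(edits) }, function (response) {
            // on the PHP side: $edits = json_decode($_POST['edits'], true);
        });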

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures with which to implement a simple open-source object temporal database, and currently I'm very fond of using persistent red-black trees to do it.

    My main reason for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. It will also be easier to implement ACID transactions, and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing temporal databases almost free, and that is something quite nice to have, especially for the web and for data analysis (e.g. trends).

    All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application on a false premise, only to realize it isn't really a good way to do it. Here's my line of thought:

    - Since all writes are done asynchronously, and using a persistent data structure makes it possible not to invalidate the previous (and currently valid) structure, the write time isn't really a bottleneck.
    - There is some literature on structures like this designed exactly for disk usage, but it seems to me that these techniques add more read overhead to achieve faster writes, while I think exactly the opposite is preferable. Also, many of these techniques really do end up with multi-versioned trees, but they aren't strictly immutable, which is something very crucial to justify the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and I also know there should be good garbage-collecting logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). A delta-compression system could also be considered.
    - Of all search tree structures, I really think red-black trees are the closest to what I need, since they offer the least number of rotations.

    But there are some possible pitfalls along the way:

    - Asynchronous writes could affect applications that need the data in real time, but I don't think that is the case with web applications most of the time. Also, when real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner.
    - They could lead to some commit conflicts, though I fail to think of a good example of when that could happen. Also, commit conflicts can occur in a normal RDBMS if two threads are working with the same data, right?
    - The overhead of having an immutable interface like this will grow exponentially and everything is doomed to fail soon, so this is all a bad idea.

    Any thoughts? Thanks!

    Edit: there seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure

  • Using IoC and Dependency Injection, how do I wrap an existing implementation with a new layer of implementation?

    - by Dividnedium
    I'm trying to figure out how this would be done in practice, so as not to violate the open-closed principle.

    Say I have a class called HttpFileDownloader with one function that takes a url, downloads a file and returns its html as a string. This class implements an IFileDownloader interface which has just the one function. So all over my code I have references to the IFileDownloader interface, and I have my IoC container return an instance of HttpFileDownloader whenever an IFileDownloader is resolved.

    Then, after some use, it becomes clear that occasionally the server is too busy and an exception is thrown. I decide that to get around this, I'm going to auto-retry 3 times if I get an exception, waiting 5 seconds between each retry. So I create HttpFileDownloaderRetrier, which has one function that uses HttpFileDownloader in a loop of at most 3 iterations, with a 5-second wait between iterations. So that I can test the "retry" and "wait" abilities of HttpFileDownloaderRetrier, I have the HttpFileDownloader dependency injected by having the HttpFileDownloaderRetrier constructor take an IFileDownloader.

    So now I want all resolving of IFileDownloader to return HttpFileDownloaderRetrier. But if I do that, then HttpFileDownloaderRetrier's IFileDownloader dependency will get an instance of itself, and not of HttpFileDownloader.

    So I can see that I could create a new interface for HttpFileDownloader called IFileDownloaderNoRetry and change HttpFileDownloader to implement that. But that means I'm changing HttpFileDownloader, which violates open-closed. Or I could implement a new interface for HttpFileDownloaderRetrier called IFileDownloaderRetrier, and then change all my other code to refer to that instead of IFileDownloader. But again, I'm now violating open-closed in all my other code.

    So what am I missing here? How do I wrap an existing implementation (downloading) with a new layer of implementation (retrying and waiting) without changing existing code?

    Here's some code if it helps:

        public interface IFileDownloader
        {
            string Download(string url);
        }

        public class HttpFileDownloader : IFileDownloader
        {
            public string Download(string url)
            {
                // cut for brevity - downloads file here, returns as string
                return html;
            }
        }

        public class HttpFileDownloaderRetrier : IFileDownloader
        {
            IFileDownloader fileDownloader;

            public HttpFileDownloaderRetrier(IFileDownloader fileDownloader)
            {
                this.fileDownloader = fileDownloader;
            }

            public string Download(string url)
            {
                Exception lastException = null;

                // try 3 shots of pulling a bad URL, waiting 5 seconds
                // after each failed attempt
                for (int i = 0; i < 3; i++)
                {
                    try
                    {
                        // return on success (the original snippet dropped the result)
                        return fileDownloader.Download(url);
                    }
                    catch (Exception ex)
                    {
                        lastException = ex;
                    }
                    Utilities.WaitForXSeconds(5);
                }
                throw lastException;
            }
        }
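    For what it's worth, a hedged sketch of how a hand-rolled composition root can express this decoration without any new interface; most IoC containers offer an equivalent (named registrations or explicit constructor arguments) for exactly this wiring:

        // Only the composed object is handed out as IFileDownloader;
        // the inner HttpFileDownloader is never resolved on its own.
        IFileDownloader downloader =
            new HttpFileDownloaderRetrier(new HttpFileDownloader());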

  • Understanding SingleTableEntityPersister n QueryLoader

    - by Iapilgrim
    Hi, I have this Hibernate model:

        @Cache(usage = CacheConcurrencyStrategy.NONE, region = SitesConstants.CACHE_REGION)
        public class Node extends StatefulEntity implements Inheritable, Cloneable {
            private Node _parent;
            private List<Node> _childNodes;
            ...
        }

        @Cache(usage = CacheConcurrencyStrategy.NONE, region = SitesConstants.CACHE_REGION)
        public class Page extends Node implements Defaultable, Securable {
            private RootZone _rootZone;
            ...

            @OneToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "root_zone_id", insertable = false, updatable = false)
            public RootZone getRootZone() {
                return _rootZone;
            }

            public void setRootZone(RootZone rootZone) {
                if (rootZone != null) {
                    rootZone.setPageId(this.getId());
                    _rootZone = rootZone;
                }
            }
            ...
        }

    I want to get all pages (call getSiteTree), so I use this query:

        String hpql = "SELECT n FROM Node n ";

    In the trace I find:

        Page.setRootZone(RootZone) line: 155
        NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not available [native method]
        NativeMethodAccessorImpl.invoke(Object, Object[]) line: 39
        DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25
        Method.invoke(Object, Object...) line: 597
        BasicPropertyAccessor$BasicSetter.set(Object, Object, SessionFactoryImplementor) line: 66
        PojoEntityTuplizer(AbstractEntityTuplizer).setPropertyValues(Object, Object[]) line: 352
        PojoEntityTuplizer.setPropertyValues(Object, Object[]) line: 232
        SingleTableEntityPersister(AbstractEntityPersister).setPropertyValues(Object, Object[], EntityMode) line: 3580
        TwoPhaseLoad.initializeEntity(Object, boolean, SessionImplementor, PreLoadEvent, PostLoadEvent) line: 152
        QueryLoader(Loader).initializeEntitiesAndCollections(List, Object, SessionImplementor, boolean) line: 877
        QueryLoader(Loader).doQuery(SessionImplementor, QueryParameters, boolean) line: 752
        QueryLoader(Loader).doQueryAndInitializeNonLazyCollections(SessionImplementor, QueryParameters, boolean) line: 259
        QueryLoader(Loader).doList(SessionImplementor, QueryParameters) line: 2232
        QueryLoader(Loader).listIgnoreQueryCache(SessionImplementor, QueryParameters) line: 2129
        QueryLoader(Loader).list(SessionImplementor, QueryParameters, Set, Type[]) line: 2124
        QueryLoader.list(SessionImplementor, QueryParameters) line: 401
        QueryTranslatorImpl.list(SessionImplementor, QueryParameters) line: 363
        HQLQueryPlan.performList(QueryParameters, SessionImplementor) line: 196
        SessionImpl.list(String, QueryParameters) line: 1149
        QueryImpl.list() line: 102
        QueryImpl.getResultList() line: 67
        NodeDaoImpl.getSiteTree(long) line: 358
        PageNodeServiceImpl.getSiteTree(long) line: 797
        NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not available [native method]
        NativeMethodAccessorImpl.invoke(Object, Object[]) line: 39
        DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25
        Method.invoke(Object, Object...) line: 597
        AopUtils.invokeJoinpointUsingReflection(Object, Method, Object[]) line: 307
        JdkDynamicAopProxy.invoke(Object, Method, Object[]) line: 198
        $Proxy100.getSiteTree(long) line: not available

    The call to setRootZone in Page makes Hibernate issue a hit to the database, and I don't want this. So my questions are: why does the query "SELECT n FROM Node n" produce the unexpected trace above, while "SELECT n.nodename FROM Node n" does not? What is the mechanism behind this?

    Note: I'm using Hibernate level-2 caching. In case I don't want those extra database hits (I mean, I want to get the Node data only), how should I do it? Thanks for your help. Sorry for my bad English. :( Van

  • Would a Centralized Blogging Service Work?

    - by viatropos
    If there's a better place to ask this, please let me know. Every time I build a new website/blog/shopping-cart/etc., I keep trying to do the following:

    - Extract common functionality into reusable code (Rubygems and jQuery plugins mostly).
    - If possible, convert that gem into a small service so I never have to deal with a database for the objects involved (by service, I mean something lean and mean, usually built with the Sinatra web framework with a few core models).

    My assumption is that if I can remove dependencies on local databases, that will make things easier and more scalable in the long run (scalable in terms of reusability and manageability, not necessarily database/performance). I'm not sure if that's a good or bad assumption yet. What do you think?

    I've made this assumption because most serious database/model functionality has been built on the internet somewhere. Just to name a few:

    - Social network API: Facebook
    - Messaging API: Twitter
    - Mailing API: Google
    - Event API: Eventbrite
    - Shopping API: Shopify
    - Comment API: Disqus
    - Form API: Wufoo
    - Image API: Picasa
    - Video API: Youtube
    - ...

    Each of those things is fairly complicated to build from scratch and to make as optimized, simple, and easy to use as those companies have. So if I build an app that shows pictures (Picasa) on an event page (Eventbrite), and you can see who joined the event (Facebook Events), and send them emails (Google Apps API), and have them fill out monthly surveys (Wufoo), and watch a video when they're done (Youtube), all integrated into a custom, easy-to-use website, and I can do that without ever creating a local database, is that a good thing?

    I ask because there are two things missing from the puzzle that keep forcing me to create that local database:

    - Post API
    - RESTful/pretty URL API

    While there are plenty of blogging systems and APIs for them, there is no one place where you can just write content and have it be part of some massive thing. For every app, I have to use code for creating pretty/RESTful urls, and that saves posts. But it seems like that should be a service! The question is, is that the main point of a website? Will everyone always need "their own blog"? Why not just have a profile and write lots of content on an established platform like StackOverflow or Facebook?

  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema. My first version just created one table per entry schema, with each entry spanning a single column or multiple columns (for complex types) of the appropriate data type. This allowed for "fast" querying (on small datasets, as I didn't index all columns) and simple synchronization when the data entry was distributed over several databases.

    I'm not quite happy with this solution though; the only positive thing about it is its simplicity:

    - I can only store a fixed number of columns.
    - I need to create indexes on all columns.
    - I need to recreate the table on schema changes.

    Some of my key design criteria are:

    - Very fast querying (using a simple domain-specific query language)
    - Writes don't have to be fast
    - Many concurrent users
    - Schemas will change often
    - Schemas might contain many thousands of columns
    - The data entries might be distributed and need synchronization
    - Preferably MySQL and SQLite; databases like DB2 and Oracle are out of the question
    - Using .NET/Mono

    I've been thinking of a couple of possible designs, but none of them seems like a good choice:

    - Solution 1: a union-like table containing a Type column and one nullable column per type. This avoids joins, but will definitely use a lot of space.
    - Solution 2: a key/value store. All values are stored as strings and converted when needed. This also uses a lot of space, and of course I hate having to convert everything to strings.
    - Solution 3: use an XML database, or store values as XML. Without any experience, I would think this is quite slow (at least for the relational model, unless there is some very good XPath support). I would also like to avoid an XML database, as other parts of the application fit the relational model better, and being able to join the data is helpful.

    I cannot help thinking that someone has solved (some of) this already, but I'm unable to find anything, and I'm not quite sure what to search for either. I know market research does something like this for their questionnaires, but there are few open-source implementations, and the ones I've found don't quite fit the bill. PSPP has much of the logic I'm thinking of: primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database. And of course, I don't need 99% of the provided functionality, but a lot of stuff it doesn't include.

    I'm not sure this is the right place to ask such a design-related question, but I hope someone here has some tips, knows of existing work, or can point me to a better place to ask. Thanks in advance!
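    A hedged sketch of what "Solution 1" above might look like in MySQL (table and column names are illustrative, not from the question):

        -- One row per entered value; value_type discriminates which of the
        -- nullable typed columns is populated.
        CREATE TABLE entry_value (
            entry_id   INT      NOT NULL,
            field_id   INT      NOT NULL,
            value_type TINYINT  NOT NULL,
            int_val    BIGINT       NULL,
            real_val   DOUBLE       NULL,
            text_val   TEXT         NULL,
            date_val   DATETIME     NULL,
            PRIMARY KEY (entry_id, field_id)
        );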

  • Javascript HTML and Script injection issue in IE

    - by MartinHN
    Hi, I have a JavaScript variable containing escaped HTML. There can be script tags inside the HTML, like this:

        var valueToInsert = "%3Cscript%20type%3D%22text/javascript%22%3Ealert%28%27test%27%29%3B%3C/script%3E%0A%3Cscript%20type%3D%22text/javascript%22%20src%3D%22http%3A//devserver/testinclude.js%22%3E%3C/script%3E%0A%3Cimg%20src%3D%22http%3A//www.footballpictures.net/data/media/131/manchester_united_logo.jpg%22%20/%3E";

    I want to append this to the DOM and have all the JavaScript fire as expected. Right now I'm using this approach:

        var div = document.createElement("div");
        div.innerHTML = unescape(valueToInsert);
        document.body.appendChild(div);

    In IE, at the moment I set div.innerHTML, all script tags are removed. If I use jQuery and do this:

        $(document.body).append(valueToInsert);

    it all works fine. The bad thing is that I cannot use jQuery, as this code will be added to sites I'm not in control of using some "already-implemented" script includes. Does someone have a trick? If jQuery can do it, it must be possible.

    I had another issue in Opera, so I changed the injection script to the following (which still doesn't work in IE):

        var div = document.createElement("div");
        div.innerHTML = unescape(valueToInsert);

        var a = new Array();
        for (var i = 0; i < div.childNodes.length; i++)
            a.push(div.childNodes[i]);

        for (var i = 0; i < a.length; i++) {
            if (a[i].nodeName == "SCRIPT"
                    && a[i].getAttribute("src") != null
                    && a[i].getAttribute("src") != ""
                    && typeof (a[i].getAttribute("src")) != "undefined") {
                var scriptTag = document.createElement("script");
                scriptTag.src = a[i].getAttribute("src");
                scriptTag.type = "text/javascript";
                document.body.appendChild(scriptTag);
            } else if (a[i].nodeName == "SCRIPT") {
                eval(a[i].innerHTML);
            } else {
                document.body.appendChild(a[i]);
            }
        }

  • UTF-8 GET using Indy 10.5.8.0 and Delphi XE2

    - by Bogdan Botezatu
    I'm writing my first Unicode application with Delphi XE2, and I've stumbled upon an issue with GET requests to a Unicode URL. Shortly put, it's a routine in an MP3-tagging application that takes a track title and an artist and queries Last.FM for the corresponding album, track number and genre. I have the following code:

        function GetMP3Info(artist, track: string) : TMP3Data; // <-- (this is a record)
        var
          TrackTitle, ArtistTitle : WideString;
          webquery : WideString;
        [....]
          WebQuery := UTF8Encode('http://ws.audioscrobbler.com/2.0/?method=track.getcorrection&api_key='
            + apikey + '&artist=' + artist + '&track=' + track);

          // [processing the result of the web query, getting the correction
          //  for the artist and title]
          // e.g.: for artist := Bucovina and track := Mestecanis, the
          // corrected values are:
          //   ArtistTitle := Bucovina;
          //   TrackTitle  := Mestecăniș;

          // Now here is the tricky part:
          webquery := UTF8Encode('http://ws.audioscrobbler.com/2.0/?method=track.getInfo&api_key='
            + apikey + '&artist=' + unescape(ArtistTitle) + '&track=' + unescape(TrackTitle));
          // the unescape function replaces spaces (' ') with '+' to comply
          // with Last.FM requests

        [some more processing]
        end;

    The webquery looks right in a TMemo (http://ws.audioscrobbler.com/2.0/?method=track.getInfo&api_key=e5565002840xxxxxxxxxxxxxx23b98ad&artist=Bucovina&track=Mestecăniș). Yet, when I try to send a GET to the webquery using TIdHTTP (with the ContentEncoding property set to 'UTF-8'), I see in Wireshark that the component is GETting the data with the ANSI value '/2.0/?method=track.getInfo&api_key=e5565002840xxxxxxxxxxxxxx23b98ad&artist=Bucovina&track=Mestec?ni?'.

    Here are the full headers of the GET request and response:

        GET /2.0/?method=track.getInfo&api_key=e5565002840xxxxxxxxxxxxxx23b98ad&artist=Bucovina&track=Mestec?ni? HTTP/1.1
        Content-Encoding: UTF-8
        Host: ws.audioscrobbler.com
        Accept: text/html, */*
        Accept-Encoding: identity
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.23) Gecko/20110920 Firefox/3.6.23 SearchToolbar/1.22011-10-16 20:20:07

        HTTP/1.0 400 Bad Request
        Date: Tue, 09 Oct 2012 20:46:31 GMT
        Server: Apache/2.2.22 (Unix)
        X-Web-Node: www204
        Access-Control-Allow-Origin: *
        Access-Control-Allow-Methods: POST, GET, OPTIONS
        Access-Control-Max-Age: 86400
        Cache-Control: max-age=10
        Expires: Tue, 09 Oct 2012 20:46:42 GMT
        Content-Length: 114
        Connection: close
        Content-Type: text/xml; charset=utf-8;

        <?xml version="1.0" encoding="utf-8"?>
        <lfm status="failed">
            <error code="6">
                Track not found
            </error>
        </lfm>

    The question that puzzles me is: am I overlooking anything related to setting the properties of the TIdHTTP control? How can I stop the well-formed URL I'm composing in the application from being sent wrongly to the server? Thanks.
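    One avenue Indy itself provides, offered here as an assumption to verify against the installed 10.5.8 sources: the TIdURI helpers in the IdURI unit can percent-encode the URL before it is handed to TIdHTTP.Get, so non-ASCII characters travel as %-escapes rather than being flattened to ANSI. A hedged sketch:

        uses IdURI;
        ...
        // Sketch: let Indy percent-encode the query string; the exact
        // overloads of URLEncode vary between Indy revisions.
        WebQuery := TIdURI.URLEncode('http://ws.audioscrobbler.com/2.0/?method=track.getInfo&api_key='
          + apikey + '&artist=' + ArtistTitle + '&track=' + TrackTitle);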

  • how to refresh list of photos after uploadify jquery

    - by robert
    Hello, I want to make a sort of photo editor. I use uploadify to upload images; here are my files: http://www.mediafire.com/?nddjyzyygj5

    The problem: after uploading the images, I dynamically generate a thumb for each. When I click on a thumb, it shows me the large picture on another page; I want to show the picture in a div or a paragraph instead. After I refresh the page it works. Why?

    From my PHP script I receive only the image name (response). After UploadifyComplete I append this:

        jQuery("#" + jQuery(this).attr('id') + ID).html('<a href="uploads/' + response + '"><img width="60px" height="60px" src="uploads/' + response + '" alt="' + response + '" /></a>');

    to this:

        jQuery(queue).append('<li class="uploadifyQueueItem">\
            <span class="fileName">' + fileName + ' (' + byteSize + suffix + ')</span>\
            <div class="uploadifyProgress">\
                <div id="' + jQuery(this).attr('id') + ID + 'ProgressBar" class="uploadifyProgressBar"><!--Progress Bar--></div>\
            </div>\
        </li>');

    and the result will be:

        <div class="uploadifyQueue">
            <ul id="mainftpQueue">
                <li class="uploadifyQueueItem">
                    <a href="uploads/Winter.jpg"><img height="60px" width="60px" alt="Winter.jpg" src="uploads/Winter.jpg"></a>
                </li>
            </ul>
        </div>

    I put all the images in one PHP array. After all images are uploaded, I want to refresh the div where this code is:

        <div class="uploadifyQueue">
            <?php if ($_SESSION['files']){
                print '<ul id="mainftpQueue">'."\n";
                foreach($_SESSION['files'] as $image ):
                    print '<li class="uploadifyQueueItem">'."\n";
                    print '<a href="uploads/'.$image.'"><img height="60px" width="60px" alt="'.$image.'" src="uploads/'.$image.'"></a>'."\n";
                    print "</li>\n";
                endforeach;
                print "</ul>\n";
            } ?>
        </div>

    I tried:

        $('#mainftpQueue').load(location.href+" #mainftpQueue>*","");
        $('#mainftpQueue').load("/ #mainftpQueue li");

    but with no success. Sorry for my bad English; if anyone can edit this, thanks.
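    A hedged sketch of the fragment-reload idea: in jQuery's .load(), the selector follows the URL after a space, and targeting the container around the list (rather than the list itself) avoids nesting the reloaded <ul> inside the old one:

        // Reload only the #mainftpQueue fragment of the current page into
        // the queue's container once uploading is finished.
        jQuery('.uploadifyQueue').load(location.href + ' #mainftpQueue');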

  • Performance tuning of a Hibernate+Spring+MySQL project operation that stores images uploaded by user

    - by Umar
    Hi, I am working on a web project that is Spring+Hibernate+MySQL based. I am stuck at a point where I have to store images uploaded by users into the database. Although I have written some code that works well for now, I believe things will mess up when the project goes live. Here's my domain class that carries the image bytes:

        @Entity
        public class Picture implements java.io.Serializable {
            long id;
            byte[] data;
            ...
            // getters and setters
        }

    And here's my controller that saves the file on submit:

        public class PictureUploadFormController extends AbstractBaseFormController {
            ...
            protected ModelAndView onSubmit(HttpServletRequest request,
                    HttpServletResponse response, Object command,
                    BindException errors) throws Exception {

                MultipartFile file;
                // getting MultipartFile from the command object
                ...
                // beginning hibernate transaction
                ...
                Picture p = new Picture();
                p.setData(file.getBytes());
                // makePersistent simply calls getSession().saveOrUpdate(p)
                pictureDAO.makePersistent(p);
                // committing hibernate transaction
                ...
            }
            ...
        }

    Obviously a bad piece of code. Is there any way I could use an InputStream or Blob to save the data, instead of first loading all the bytes from the user into memory and then pushing them into the database?

    I did some research on Hibernate's support for Blob, and found this in the Hibernate in Action book:

        java.sql.Blob and java.sql.Clob are the most efficient way to handle large
        objects in Java. Unfortunately, an instance of Blob or Clob is only usable
        until the JDBC transaction completes. So if your persistent class defines a
        property of java.sql.Clob or java.sql.Blob (not a good idea anyway), you'll
        be restricted in how instances of the class may be used. In particular, you
        won't be able to use instances of that class as detached objects.
        Furthermore, many JDBC drivers don't feature working support for
        java.sql.Blob and java.sql.Clob. Therefore, it makes more sense to map large
        objects using the binary or text mapping type, assuming retrieval of the
        entire large object into memory isn't a performance killer. Note you can
        find up-to-date design patterns and tips for large object usage on the
        Hibernate website, with tricks for particular platforms.

    Now, apparently Blob cannot be used, as it is "not a good idea anyway"; what else could be used to improve performance? I couldn't find any up-to-date design pattern or useful information on the Hibernate website, so any help/recommendations from stackoverflowers will be much appreciated. Thanks.
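    For reference, a hedged sketch of the binary mapping the quoted book recommends, in annotation form; the column length is an assumption (with MySQL it steers the generated DDL toward MEDIUMBLOB so larger images fit):

        @Entity
        public class Picture implements java.io.Serializable {

            @Id
            @GeneratedValue
            private long id;

            @Lob
            @Column(length = 16777215) // ~16 MB -> MEDIUMBLOB on MySQL
            private byte[] data;

            // getters and setters
        }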

  • transforming binary data using ssis and sql server 2008

    - by Rick
    Hello all - I have a task to import/transform and extract zipped binary files that contain both text data and embedded binary data. Within the data is data that is relational in nature and needs to be processed into a defined database structure. Currently I have a single-threaded C# app that essentially grabs all the files from the directory (currently there are 13K files of varying sizes) and extracts the data on a single thread, line by line, inserting into the database. As you can imagine, this is a very slow process, and unacceptable. There are several different parsing routines used, depending on the header record in each file. There are potentially up to a million rows per file when all the data is extracted to the row level of detail. A follow-on task is to parse those rows into their appropriate tables based on content, i.e. the textual content has to be parsed further into "buckets" of like data in the database. That about sums up the big picture.

    Now for the problem task list:

    1. How do I iterate through a packet of data using SSIS? In the app, the file is decompressed and then parsed using streams and byte arrays, and is routed to the required parsing routine based on the header data of each packet. There is bit swapping involved as well. Should I wrap the app code up into a script task (or tasks) and let it do the custom processing?
    2. The data is separated by year, and the SQL Server tables are partitioned by year as well. I need to be able to "catch" bad file data as well, and most likely process it by hand.
    3. Should I simply load the zipped file into SQL as a blob and parse the file with T-SQL? Would that be multi-threaded if done that way? I'm not sure how to do the parsing involved here in T-SQL. Which do you think would be faster?
    4. Potentially, the data that is currently processed via files could come to us via a socket. Can SSIS collect that data in real time? How would I go about setting that up?

    Processing these new files from the directories will become a daily task. I can manage the data once I get it into SQL Server; getting it there in a timely fashion seems to be the long pole in the tent for me. I would appreciate any comments or suggestions from the group.

    Rick
