Search Results

Search found 9916 results on 397 pages for 'counting sort'.


  • Jquery with multi level json data array

    - by coder
    var data = [{"Address":{"Address":"4 Selby Road\nHowden","AddressId":"1414449","AddressLine1":"4 Selby Road","AddressLine2":"Howden","ContactId":"14248844","County":"North Humberside","Country":"UK","Postcode":"DN14 7JW","Town":"GOOLE","FullAddress":"4 Selby Road\nHowden\r\nGOOLE\r\nNorth Humberside\r\nDN14 7JW\r\nUnited Kingdom"},"ContactId":14248844,"Title":"Mrs","FirstName":"","Surname":"Neild","FullName":" Neild","PostCode":"DN14 7JW"},{"Address":{"Address":"466 Manchester Road\nBlackrod","AddressId":"1669615","AddressLine1":"466 Manchester Road","AddressLine2":"Blackrod","ContactId":"16721687","County":"","Country":"UK","Postcode":"BL6 5SU","Town":"BOLTON","FullAddress":"466 Manchester Road\nBlackrod\r\nBOLTON\r\nBL6 5SU\r\nUnited Kingdom"},"ContactId":16721687,"Title":"Miss","FirstName":"Andrea","Surname":"Neild","FullName":"Andrea Neild","PostCode":"BL6 5SU"},{"Address":{"Address":"5 Prospect Vale\nHeald Green","AddressId":"2127294","AddressLine1":"5 Prospect Vale","AddressLine2":"Heald Green","ContactId":"21178752","County":"Cheshire","Country":"UK","Postcode":"SK8 3RJ","Town":"CHEADLE","FullAddress":"5 Prospect Vale\nHeald Green\r\nCHEADLE\r\nCheshire\r\nSK8 3RJ\r\nUnited Kingdom"},"ContactId":21178752,"Title":"Mrs","FirstName":"","Surname":"Neild","FullName":" Neild","PostCode":"SK8 3RJ"}]; I'm trying to retrieve the above JSON-formatted data in jQuery as below: var source = { localdata: data, sort: customsortfunc, datafields: [ { name: 'Surname', type: 'string' }, { name: 'FirstName', type: 'string' }, { name: 'Title', type: 'string' }, { name: 'Address.Address', type: 'string' } ], datatype: "array" }; var dataAdapter = new $.jqx.dataAdapter(source); $("#jqxgrid").jqxGrid( { width: 670, source: dataAdapter, theme: theme, sortable: true, pageable: true, autoheight: true, ready: function () { //$("#jqxgrid").jqxGrid('sortby', 'firstname', 'asc'); $("#jqxgrid").jqxGrid('sortby', 'FirstName', 'asc'); }, columns: [ { text: 'Title', datafield: 'Title', width: 100 }, { text: 'First Name', datafield: 'FirstName', width: 100 }, { text: 'Last Name', datafield: 'Surname', width: 100 }, { text: 'Address', datafield: 'Address.Address', width: 100 }, ] }); The only issue is that there is no display for "Address.Address". Can anyone advise me?
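    A minimal sketch of one way around this, assuming the grid only needs the flat string: copy the nested Address.Address value onto a top-level field before binding (a dataAdapter mapping option for nested fields may also exist, but flattening avoids relying on it). The field name AddressText is an invented placeholder.

      // copy the nested value onto a flat property the grid can bind to directly
      for (var i = 0; i < data.length; i++) {
          data[i].AddressText = data[i].Address ? data[i].Address.Address : "";
      }
      // then use the flat field in place of "Address.Address":
      //   datafields: [ ..., { name: 'AddressText', type: 'string' } ]
      //   columns:    [ ..., { text: 'Address', datafield: 'AddressText', width: 100 } ]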


  • Alternative to array_shift function

    - by SoLoGHoST
    Ok, I need keys to be preserved within this array and I just want to shift the 1st element from this array. Actually I know that the first key of this array will always be 1 when I do this: // Sort it by 1st group and 1st layout. ksort($disabled_sections); foreach($disabled_sections as &$grouplayout) ksort($grouplayout); Basically I'd rather not have to ksort it in order to grab this array where the key = 1. And, honestly, I'm not a big fan of array_shift, it just takes too long IMO. Is there another way? Perhaps a way to extract the entire array where $disabled_sections[1] is found without having to do a foreach and sorting it, and array_shift. I just wanna add $disabled_sections[1] to a different array and remove it from this array altogether, while keeping both arrays' keys structured the way they are. Technically, it would even be fine to do this: $array = array(); $array = $disabled_sections[1]; But it needs to remove it from $disabled_sections. Can I use something like this approach... $array = array(); $array = $disabled_sections[1]; $disabled_sections -= $disabled_sections[1]; Is something like the above even possible?? Thanks.
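    A short sketch of the usual idiom, assuming key 1 is known up front: copy the element out, then unset() it, which removes it without touching or reindexing the remaining keys (the "-=" subtraction in the last snippet is not valid for PHP arrays).

      $array = $disabled_sections[1];   // grab the sub-array stored under key 1
      unset($disabled_sections[1]);     // remove it; every other key keeps its current structure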


  • What AOP tools exist for doing aspect-oriented programming at the assembly language level against x86?

    - by JohnnySoftware
    Looking for a tool I can use to do aspect-oriented programming at the assembly language level. For experimentation purposes, I would like the code weaver to operate on native application-level executables and dynamic link libraries. I have already done object-oriented AOP. I know assembly language for x86 and so forth. I would like to be able to do logging and other sorts of things using the familiar before/after/around constructs. I would like to be able to specify certain instructions or sequences/patterns of consecutive instructions as what to define a pointcut on, since assembly/machine language is not exactly the most semantically rich computer language on the planet. If debugger and linker symbols are available, naturally, I would like to be able to use them to identify subroutines' entry points, branch/call/jump target addresses, symbolic data addresses, etc. I would like the ability to send notifications out to other diagnostic tools. Thus, support for sending data through connection-oriented sockets and datagrams is highly desirable. So is normal logging to files, UI, etc. This can be done using the action part of an aspect to make a function call, but then there are portability issues, so the tool needs to support a flexible, well-abstracted logging/notifying mechanism with a clean, simple yet flexible interface. The goal is rapid QA. The idea is to be able to share aspect source code broadly within communities as well as publicly. So, there needs to be a declarative security policy file that users can share. This ensures that nothing untoward that is hidden directly or indirectly in an aspect source file slips by the execution manager. The policy file format needs to be simple to read, write, modify, understand, type in, edit, and generate. Sort of like Java .policy files. Think the exact opposite of anything resembling XML Schema files and you get the idea. Is there such a tool in existence already?


  • structured vs. unstructured data in db

    - by Igor
    The question is one of design. I'm gathering a big chunk of performance data with lots of key-value pairs: pretty much everything in /proc/cpuinfo, /proc/meminfo, /proc/loadavg, plus a bunch of other stuff, from several hundred hosts. Right now, I just need to display the latest chunk of data in my UI. I will probably end up doing some analysis of the data gathered to figure out performance problems down the road, but this is a new application so I'm not sure what exactly I'm looking for performance-wise just yet. I could structure the data in the db -- have a column for each key I'm gathering. The table would end up being O(100) columns wide, it would be a pain to put into the db, and I would have to add new columns if I start gathering a new stat. But it would be easy to sort/analyze the data just using SQL. Or I could just dump my unstructured data blob into the table: maybe three columns -- host id, timestamp, and a serialized version of my array, probably using JSON in a TEXT field. Which should I do? Am I going to be sorry if I go with the unstructured approach? When doing analysis, should I just convert the fields I'm interested in and create a new, more structured table? What are the trade-offs I'm missing here?
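    For concreteness, a sketch of the two layouts being weighed (all table and column names invented); a third option worth keeping in mind is a key/value table with one row per stat, which stays queryable in SQL without a 100-column schema:

      -- "wide" structured layout: one column per stat, painful to extend
      CREATE TABLE host_stats_wide (
          host_id       INT NOT NULL,
          collected_at  DATETIME NOT NULL,
          load_avg_1m   FLOAT,
          mem_free_kb   INT
          -- ... roughly a hundred more columns, one per key
      );

      -- unstructured layout: one row per snapshot, stats serialized as JSON
      CREATE TABLE host_stats_blob (
          host_id       INT NOT NULL,
          collected_at  DATETIME NOT NULL,
          stats         TEXT NOT NULL,   -- JSON-encoded key/value pairs
          PRIMARY KEY (host_id, collected_at)
      );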


  • Masking FLV video in AS3 with PNG alpha channel.

    - by James Roberts
    Hey there, I'm trying to mask an FLV with a PNG alpha channel. I'm using BitmapData (from a PNG) but it's not working. Is there anything I'm missing? Cut up code below: var musclesLoader:Loader = new Loader(); var musclesContainer:Sprite = new Sprite(); var musclesImage:Bitmap = new Bitmap(); var musclesBitmapData:BitmapData; var musclesVideo:Video = new Video(752, 451.2); var connection:NetConnection = new NetConnection(); var stream:NetStream; function loadMuscles():void { musclesLoader.load(new URLRequest('img/muscles.png')); musclesLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, musclesComplete); } function musclesComplete():void { musclesBitmapData = new BitmapData(musclesLoader.content.width, musclesLoader.content.height, true, 0x000000); musclesImage.bitmapData = musclesBitmapData; musclesImage.smoothing = true; musclesContainer.addChild(musclesImage); contentContainer.addChild(musclesContainer); } function loadMusclesVideo():void { connection.connect(null); stream = new NetStream(connection); stream.client = this; musclesVideo.mask = musclesBitmapData; stage.addChild(musclesVideo); musclesVideo.attachNetStream(stream); stream.bufferTime = 1; stream.receiveAudio(true); stream.receiveVideo(true); stream.play("vid/muscles.flv"); } Outside this code I have a function that adds the containers to the stage, etc and places the objects in the appropriate spots. It sort of works - the mask applies, but in a square (the size of the boundaries of musclesBitmapData) rather than with the shape of the alpha channel. Is this the right way to go about this?
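    A mask assigned from raw BitmapData will clip to a rectangle; for per-pixel alpha masking the mask needs to be a display object that is cached as a bitmap, as does the object being masked. A minimal sketch of the change (it also assumes the loaded PNG is actually drawn into musclesBitmapData, which the musclesComplete handler above never does):

      // in musclesComplete, copy the loaded PNG's pixels into the BitmapData
      musclesBitmapData.draw(musclesLoader.content);

      // mask with the Bitmap display object, not the BitmapData, and cache both as bitmaps
      musclesImage.cacheAsBitmap = true;
      musclesVideo.cacheAsBitmap = true;
      musclesVideo.mask = musclesImage;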


  • VB.net: Saving a Metafile / EMF as a bitmap (.tiff)

    - by pehaada
    Currently I have a third party control that generates a Metafile. I can save the .wmf file to disk without issue. The problem is how to render the Metafile as a TIFF file. Currently I have the following code to get my metafile and save it. Dim mf As Metafile = page.GetImage(TXTextControl.Page.PageContent.All) Dim enhMetafileHandle As IntPtr = mf.GetHenhmetafile() Dim h As IntPtr Dim bufferSize As UInteger = GetEnhMetaFileBits(enhMetafileHandle, 0, h) Dim buffer(CInt(bufferSize)) As Byte GetEnhMetaFileBits(enhMetafileHandle, bufferSize, buffer) Dim msMetafileStream As New MemoryStream msMetafileStream.Write(buffer, 0, CInt(bufferSize)) Dim baMetafileData() As Byte baMetafileData = msMetafileStream.ToArray Dim g As Graphics = Graphics.FromImage(mf) mf.Dispose() File.WriteAllBytes("c:\a.wmf", baMetafileData) end sub _ Public Shared Function GetEnhMetaFileBits( ByVal hEMF As System.IntPtr, ByVal nSize As UInteger, ByVal lpData As IntPtr) As UInteger End Function <System.Runtime.InteropServices.DllImportAttribute("gdi32.dll", EntryPoint:="GetEnhMetaFileBits")> _ Public Shared Function GetEnhMetaFileBits(<System.Runtime.InteropServices.InAttribute()> ByVal hEMF As System.IntPtr, ByVal nSize As UInteger, ByVal lpData() As Byte) As UInteger End Function I've tried all sorts of Image and Graphics calls and just can't save the metafile as a .tiff. Any suggestions would be great. I even tried to create a new bitmap and draw the metafile onto it. I always end up with a GDI exception being thrown.
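    A minimal sketch of the draw-onto-a-bitmap route, assuming System.Drawing and that mf has not yet been disposed (the Graphics.FromImage(mf) call in the code above is also a likely source of the GDI+ exception, since a Graphics generally can't be created from a metafile loaded this way):

      Dim bmp As New Bitmap(mf.Width, mf.Height)
      Using g As Graphics = Graphics.FromImage(bmp)
          g.Clear(Color.White)                        ' metafiles are usually transparent; paint a background
          g.DrawImage(mf, 0, 0, mf.Width, mf.Height)  ' rasterize the vector metafile onto the bitmap
      End Using
      bmp.Save("c:\a.tiff", System.Drawing.Imaging.ImageFormat.Tiff)
      bmp.Dispose()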


  • SQL (mySQL) update some value in all records processed by a select

    - by jdmuys
    I am using mySQL from their C API, but that shouldn't be relevant. My code must process records from a table that match some criteria, and then update the said records to flag them as processed. The lines in the table are modified/inserted/deleted by another process I don't control. I am afraid in the following, the UPDATE might flag some records erroneously since the set of records matching might have changed between step 1 and step 3. SELECT * FROM myTable WHERE <CONDITION>; # step 1 <iterate over the selected set of lines. This may take some time.> # step 2 UPDATE myTable SET processed=1 WHERE <CONDITION> # step 3 What's the smart way to ensure that the UPDATE updates all the lines processed, and only them? A transaction doesn't seem to fit the bill as it doesn't provide isolation of that sort: a recently modified record not in the originally selected set might still be targeted by the UPDATE statement. For the same reason, SELECT ... FOR UPDATE doesn't seem to help, though it sounds promising :-) The only way I can see is to use a temporary table to memorize the set of rows to be processed, doing something like: CREATE TEMPORARY TABLE workOrder (jobId INT(11)); INSERT INTO workOrder SELECT myID as jobId FROM myTable WHERE <CONDITION>; SELECT * FROM myTable WHERE myID IN (SELECT * FROM workOrder); <iterate over the selected set of lines. This may take some time.> UPDATE myTable SET processed=1 WHERE myID IN (SELECT * FROM workOrder); DROP TABLE workOrder; But this seems wasteful and not very efficient. Is there anything smarter? Many thanks from a SQL newbie.
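    One common alternative is a claim-then-process pattern (a sketch, assuming a spare status value such as 2 can be reserved in the processed column and that no other process writes it): flag the matching rows in a single atomic UPDATE first, and then both the processing loop and the final UPDATE operate on exactly that claimed set, regardless of what the other process inserts or modifies in the meantime.

      UPDATE myTable SET processed = 2 WHERE <CONDITION>;    -- atomically claim the rows matching right now
      SELECT * FROM myTable WHERE processed = 2;             -- step 2: iterate over only the claimed rows
      -- ... long-running processing ...
      UPDATE myTable SET processed = 1 WHERE processed = 2;  -- flag exactly the rows that were processed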


  • Replace image in word doc using OpenXML

    - by fearofawhackplanet
    Following on from my last question here OpenXML looks like it probably does exactly what I want, but the documentation is terrible. An hour of googling hasn't got me any closer to figuring out what I need to do. I have a word document. I want to add an image to that word document (using word) in such a way that I can then open the document in OpenXML and replace that image. Should be simple enough, yes? I'm assuming I should be able to give my image 'placeholder' an id of some sort and then use GetPartById to locate the image and replace it. Would this be the correct method? What is this Id? How do you add it using Word? Every example I can find which does anything remotely similar starts by building the whole word document from scratch in ML, which really isn't a lot of use. EDIT: it occurred to me that it would be easier to just replace the image in the media folder with the new image, but again can't find any indication of how to do this.
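    A rough sketch of the media-folder idea using the OpenXML SDK: each picture is stored as an ImagePart under MainDocumentPart, and its bytes can be swapped with FeedData. The file names are placeholders, and the example assumes the placeholder is the only image in the document; otherwise the right part has to be located via the r:embed relationship id on the picture's Blip element.

      // using System.IO; using System.Linq; using DocumentFormat.OpenXml.Packaging;
      using (WordprocessingDocument doc = WordprocessingDocument.Open("template.docx", true))
      {
          ImagePart imagePart = doc.MainDocumentPart.ImageParts.First(); // assumes a single placeholder image
          using (FileStream newImage = File.OpenRead("replacement.png"))
          {
              imagePart.FeedData(newImage);  // overwrite the stored media, keeping the existing relationship
          }
      }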


  • How can I load a file into a web app at certain intervals?

    - by Elena
    Hi all! I have the following task: I need to load the same file into my web app several times, for example twice a day. Suppose that file holds information that changes, and I need to load this info into my app to update the statistics, for example. How can I load the file several times (twice an hour, or twice a day)? What should I use? Is there an algorithm to do that? I am not allowed to use external libraries like Quartz Scheduler, so I need to do it with a Thread and/or Timer. Can anybody give me an example or an algorithm for how to do it? Where should I create the entry point for my Thread: can I do it in a managed bean, or do I need some sort of filter/listener/servlet? I work with JSF and RichFaces. Maybe these technologies offer some way to solve my problem. Any ideas? Thanks very much for the help!
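    A minimal sketch using only the JDK: a ServletContextListener (registered in web.xml) that starts a ScheduledExecutorService on startup and reloads the file at a fixed interval. Class and method names are just placeholders.

      // imports assumed: javax.servlet.*, java.util.concurrent.*
      public class FileReloadListener implements ServletContextListener {
          private ScheduledExecutorService scheduler;

          public void contextInitialized(ServletContextEvent sce) {
              scheduler = Executors.newSingleThreadScheduledExecutor();
              // run immediately, then every 12 hours (i.e. twice a day)
              scheduler.scheduleAtFixedRate(new Runnable() {
                  public void run() {
                      reloadStatisticsFile(); // your own parsing/refresh logic
                  }
              }, 0, 12, TimeUnit.HOURS);
          }

          public void contextDestroyed(ServletContextEvent sce) {
              scheduler.shutdownNow();
          }

          private void reloadStatisticsFile() { /* read the file and update application state */ }
      }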


  • PCA extended face recognition

    - by cMinor
    The state of the art says that we can use PCA to perform face recognition, like this, this or this. I am working on a project that involves training a classifier to detect a person who is wearing glasses, a hat, or even a mustache. The purpose of doing this is to detect when a person who has robbed a bank or store, or has committed some sort of crime (we have their image in a database), enters a certain place (historically we know these guys have robbed, so we should take care to avoid problems). We first planned to have a distributed database with all images of criminals, and then I thought of adding a layer classifying these criminals by accessories like hats, mustaches or anything that hides their face, etc. Then, to apply that knowledge to detect when a particular or suspect person enters a commercial place. (In practice, when someone is going to rob, they are not always wearing an accessory...) What do you think about this idea of doing PCA to first detect the principal components of the face and then the components of an accessory? I was thinking that maybe a probabilistic approach is better, so we can compute the probability that the criminal is the person who entered a place and call the respective authorities.


  • Making that move from junior > mid level

    - by dotnetdev
    Hi, Before I start, I know there is another thread about this very issue (http://stackoverflow.com/questions/2352874/moving-from-junior-developer-to-mid-level). I am in this very same situation, but of course every person and the company/employment-history is not the same. In my current company, I have not done one piece of coding from start to finish with the oversight of my manager and a Project Manager to manage the work/deadlines etc. I am basically an odd-jobs type of guy. The coding I do is on the side to whatever boring spreadsheet/word document I have to write. Very illogical that you're a coder and you're doing it in secret. In another job I had for 3 months (I was made redundant), it required 1 year's experience, perhaps because of the fact I was the sole developer. It wasn't too hard, but then I was solely responsible and I learnt a lot from that. I had 2 other 3-month jobs (contracts), so I have been working for 1 year 9 months. I have now found a job which I'm in the last stage for, which needs 3 years' .NET experience and 2 years' SharePoint. How can I know if I am ready for this job? My current job has been going on for 1 year, but it doesn't mean squat apart from explaining how I have spent my time. It does not tell me what level I am at (apart from the huge skills gap I have opened up against my peers because I practise at home). So 1 year of doing nothing at work, but 1 year of doing loads at home. In fact, I could take 1 week off and do more at home than in the company since I started. How can I know if I am ready for such a job? I am generally very confident given all I've achieved in coding, but I have no idea what a job with this sort of experience entails (what day-to-day problems I would be facing). Is there any advice on how to handle this transition? Thanks


  • Autodetect Presence of CSV Headers in a File

    - by banzaimonkey
    Short question: How do I automatically detect whether a CSV file has headers in the first row? Details: I've written a small CSV parsing engine that places the data into an object that I can access as (approximately) an in-memory database. The original code was written to parse third-party CSV with a predictable format, but I'd like to be able to use this code more generally. I'm trying to figure out a reliable way to automatically detect the presence of CSV headers, so the script can decide whether to use the first row of the CSV file as keys / column names or start parsing data immediately. Since all I need is a boolean test, I could easily specify an argument after inspecting the CSV file myself, but I'd rather not have to (go go automation). I imagine I'd have to parse the first 3 to ? rows of the CSV file and look for a pattern of some sort to compare against the headers. I'm having nightmares of three particularly bad cases in which: The headers include numeric data for some reason The first few rows (or large portions of the CSV) are null The headers and data look too similar to tell them apart If I can get a "best guess" and have the parser fail with an error or spit out a warning if it can't decide, that's OK. If this is something that's going to be tremendously expensive in terms of time or computation (and take more time than it's supposed to save me) I'll happily scrap the idea and go back to working on "important things". I'm working with PHP, but this strikes me as more of an algorithmic / computational question than something that's implementation-specific. If there's a simple algorithm I can use, great. If you can point me to some relevant theory / discussion, that'd be great, too. If there's a giant library that does natural language processing or 300 different kinds of parsing, I'm not interested.
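    A sketch of one simple heuristic: compare the "type signature" of the first row against the rows below it; if a column is consistently numeric in the body but not in row one, row one probably holds a header. The function name and voting rule are invented for illustration, and the null-row and headers-look-like-data cases remain weak spots.

      function csvLooksLikeItHasHeader(array $rows) {
          if (count($rows) < 2) {
              return false; // not enough data to compare against
          }
          $header = $rows[0];
          $votes = 0;
          foreach ($header as $col => $value) {
              $numericBelow = 0;
              $seen = 0;
              foreach (array_slice($rows, 1) as $row) {
                  if (!isset($row[$col]) || $row[$col] === '') {
                      continue; // skip nulls so sparse rows don't skew the vote
                  }
                  $seen++;
                  if (is_numeric($row[$col])) {
                      $numericBelow++;
                  }
              }
              // numeric in the body but not in row 0 => looks like a header cell
              if ($seen > 0 && $numericBelow === $seen && !is_numeric($value)) {
                  $votes++;
              }
          }
          return $votes > 0;
      }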


  • General N-Tier Architecture Question

    - by whatispunk
    In an N-Tier app you're supposed to have a business logic layer and a data access layer. Is it bad to simply have two assemblies: BusinessLogicLayer.dll and DataAccessLayer.dll to handle all this logic? How do you actually represent these layers? It seems silly, the way I've seen it, to have a BusinessLogic class library containing classes like: CustomerBusinessLogic.cs, OrderBusinessLogic.cs, etc. each calling their appropriately named cousin in the DataAccessLayer class library, i.e. CustomerDataAccess.cs, OrderDataAccess.cs. I want to create a web app using MVP and it doesn't seem so cut and dry as this. There are lots of opinions about where the business logic is supposed to be put in MVP and I'm not sure I've found a really great answer yet. I want this project to be easily testable, and I am trying to adhere to TDD methodologies as best I can. I intend to use MSTest and Rhino Mocks for testing. I was thinking of something like the following for my architecture: I'd use LINQ-To-SQL to talk to the database. WCF services to define data contract interfaces for the business logic layer. Then use MVP with ASP.NET Forms for the UI/BLL. Now, this isn't the start of this project, most of the LINQ stuff is already done, so it's stuck. The WCF service would replace the existing DataAccessLayer assembly and the UI/BLL would replace the BusinessLogicLayer assembly etc. This sort of makes sense in my head, but it's getting really late. Anyone that's traveled down this path have any guidance? Good links? Warnings? Thanks!
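    For the testability goal specifically, a small sketch of the usual shape (all names invented): the business layer depends on an interface rather than on a concrete data-access class, so Rhino Mocks can substitute the repository in unit tests regardless of whether the real implementation is LINQ-to-SQL or a WCF proxy.

      public interface IOrderRepository            // implemented by the LINQ-to-SQL / WCF-backed layer
      {
          Order GetById(int orderId);
          void Save(Order order);
      }

      public class OrderService                    // business logic layer
      {
          private readonly IOrderRepository _orders;

          public OrderService(IOrderRepository orders)
          {
              _orders = orders;                    // injected, so tests can pass a mock instead
          }

          public void ApplyDiscount(int orderId, decimal percent)
          {
              Order order = _orders.GetById(orderId);
              order.Total -= order.Total * percent / 100m;
              _orders.Save(order);
          }
      }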


  • IIS: How to get the Metabase path?

    - by Ian Boyd
    I'm trying to get the list of mime types known to an IIS server (which you can see was asked and answered by me 2 years ago). The copy-pasted answer involves: GetObject("IIS://LocalHost/MimeMap") msdn GetObject("IIS://localhost/mimemap") KB246068 GetObject("IIS://localhost/MimeMap") Scott Hanselman's Blog new DirectoryEntry("IIS://Localhost/MimeMap")) Stack Overflow new DirectoryEntry("IIS://Localhost/MimeMap")) Stack Overflow New DirectoryServices.DirectoryEntry("IIS://localhost/MimeMap") Velocity Reviews You get the idea. Everyone agrees that you use a magical path iis://localhost/mimemap. And this works great, except for the times when it doesn't. The only clue I can find as to why it fails is from an IIS MVP, Chris Crowe's, blog: string ServerName = "LocalHost"; string MetabasePath = "IIS://" + ServerName + "/MimeMap"; // Note: This could also be something like // string MetabasePath = "IIS://" + ServerName + "/w3svc/1/root"; DirectoryEntry MimeMap = new DirectoryEntry(MetabasePath); There are two clues here: He calls iis://localhost/mimemap the Metabase Path. Which sounds to me like it is some sort of "path" to a "metabase". He says that the path to the metabase could be something else; and he gives an example of what it could be like. Right now I, and the entire planet, are hardcoding the "MetabasePath" as iis://localhost/MimeMap What should it really be? What should the code be doing to construct a valid MetabasePath? Note: I'm not getting an access denied error, the error is the same when you have an invalid MetabasePath, e.g. iis://localhost/SoTiredOfThis


  • Asynchronous Image loading in certain custom UITableViewCells

    - by Dan
    So, I'm sure that this issue has been brought up before, but I haven't quite seen a solution for my specific problem. What I'm doing is loading a group of custom UITableViewCells that are drawn using Loren Brichter's solution; each cell has some content, an icon (representing the user) that is asynchronously loaded into it, and a few other things. Eventually, I'm hoping to add support for images. Not every cell has an image, so in the creation of the cell, it determines if an image is required to be loaded and drawn into the cell. My problem is that I don't know how I can do that while keeping the image loading asynchronous - with not every cell needing to load an image (like the icon does), I'm not sure how to keep it in order when a cell is thrown off screen and re-rendered when the user scrolls over it. Each cell draws its content from an NSArray containing a custom object called FeedItem. All I'm really looking for is some sort of solution or idea to help me because right now I am at a loss. Thanks guys, I will appreciate the help.
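    A rough sketch of one common pattern (assuming the controller keeps an NSMutableDictionary image cache and each FeedItem exposes an image URL; names are placeholders): load the image on a background queue, then hop back to the main queue and refresh only the affected row, so reused cells never end up showing a stale image.

      - (void)loadImageForIndexPath:(NSIndexPath *)indexPath url:(NSURL *)url {
          dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
              NSData *data = [NSData dataWithContentsOfURL:url];
              UIImage *image = data ? [UIImage imageWithData:data] : nil;
              dispatch_async(dispatch_get_main_queue(), ^{
                  if (image) {
                      [self.imageCache setObject:image forKey:indexPath];
                      // reload just this row; cellForRowAtIndexPath picks the image up from the cache
                      [self.tableView reloadRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                                            withRowAnimation:UITableViewRowAnimationNone];
                  }
              });
          });
      }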


  • Connecting repeatable and non-repeatable backgrounds without a seam

    - by Stiggler
    I have a 700x300 background repeating seamlessly inside of the main content-div. Now I'd like to attach a div at the bottom of the content-div, containing a different background image that isn't repeatable, connecting seamlessly with the repeatable background above it. Essentially, the non-repeatable image will look like the end piece of the repeatable image. Due to the nature of the pattern, unless the full 300px height of the background image is visible in the last repeat of the content-div's backround, the background in the div below won't seamlessly connect. Basically, I need the content div's height to be a multiple of 300px under all circumstances. What's a good approach to this sort of problem? I've tried resizing the content-div on loading the page, but this only works as long as the content div doesn't contain any resizing, dynamic content, which is not my case: function adjustContentHeight() { // Setting content div's height to nearest upper multiple of column backgrounds height, // forcing it not to be cut-off when repeated. var contentBgHeight = 300; var contentHeight = $("#content").height(); var adjustedHeight = Math.ceil(contentHeight / contentBgHeight); $("#content").height(adjustedHeight * contentBgHeight); } $(document).ready(adjustContentHeight); What I'm looking for there is a way to respond to a div resizing event, but there doesn't seem to be such a thing. Also, please assume I have no access to the JS controlling the resizing of content in the content-div, though this is potentially a way of solving the problem. Another potential solution I was thinking off was to offset the background image in the bottom div by a certain amount depending on the height of the content-div. Again, the missing piece seems to be the ability to respond to a resize event.
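    Since plain divs don't fire a resize event, one workable fallback is to poll the content div's height and re-run the adjustment whenever it changes. A sketch (the 250ms interval is an arbitrary choice); note that the height is reset to auto before measuring, so content added later can still grow the div:

      setInterval(function () {
          var $content = $("#content");
          $content.css("height", "auto");                    // measure the natural content height
          var natural = $content.height();
          var adjusted = Math.ceil(natural / 300) * 300;     // nearest upper multiple of the 300px tile
          if (natural !== adjusted) {
              $content.height(adjusted);
          }
      }, 250);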


  • (C#) Get index of current foreach iteration

    - by Graphain
    Hi, Is there some rare language construct I haven't encountered (like the few I've learned recently, some on Stack Overflow) in C# to get a value representing the current iteration of a foreach loop? For instance, I currently do something like this depending on the circumstances: int i=0; foreach (Object o in collection) { ... i++; } Answers: @bryansh: I am setting the class of an element in a view page based on the position in the list. I guess I could add a method that gets the CSSClass for the Objects I am iterating through but that almost feels like a violation of the interface of that class. @Brad Wilson: I really like that - I've often thought about something like that when using the ternary operator but never really given it enough thought. As a bit of food for thought it would be nice if you could do something similar to somehow add (generically to all IEnumerable objects) a handle on the enumerator to increment the value that an extension method returns i.e. inject a method into the IEnumerable interface that returns an iterationindex. Of course this would be blatant hacks and witchcraft... Cool though... @crucible: Awesome I totally forgot to check the LINQ methods. Hmm appears to be a terrible library implementation though. I don't see why people are downvoting you though. You'd expect the method to either use some sort of HashTable of indices or even another SQL call, not an O(N) iteration... (@Jonathan Holland yes you are right, expecting SQL was wrong) @Joseph Daigle: The difficulty is that I assume the foreach casting/retrieval is optimised more than my own code would be. @Jonathan Holland: Ah, cheers for explaining how it works and ha at firing someone for using it.
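    For reference, a small sketch of the Enumerable.Select overload mentioned in the LINQ suggestion above, which exposes the index and removes the manual counter (assuming the collection is, or can be cast to, a typed IEnumerable):

      // using System.Linq;
      foreach (var entry in collection.Cast<object>().Select((item, index) => new { item, index }))
      {
          // entry.index is the position, entry.item is the element
          string cssClass = (entry.index % 2 == 0) ? "row-even" : "row-odd";
          // ... apply cssClass to the view element rendered for entry.item ...
      }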


  • Sql Server Compact - Schema Management

    - by Richard B
    I've been searching for some time for a good solution to implement the idea of managing schema on a Sql Server Compact 3.5 db. I know of several ways of managing schema on Sql Express/std/enterprise, but Compact Edition doesn't support the necessary tools required to use the same methodology. Any suggestions/tips? I should expand this to say that it is for 100+ clients with wrapperware software. As the system changes, I need to publish update scripts alongside the new binaries to the client. I was looking for a decent method by which to publish this without having to just hand the client a script file and say "Run this in SSMSE". Most clients are not capable of doing such a beast. A buddy of mine disclosed a partial script on how to handle the SQL Server piece of my task, but never worked on Compact Edition... It looks like I'll be on my own for this. What I think that I've decided to do, and it's going to need a "geek week" to accomplish, is that I'm going to write some sort of tool much like how WiX and nAnt works, so that I can just write an overzealous Xml document to handle the work. If I think that it is worthwhile, I'll publish it on CodePlex and/or CodeProject because I've used both sites a bit to gain better understanding of concepts for jobs I've done in the past, and I think it is probably worthwhile to give back a little.


  • Best Practice for Uploading Many (2000+) Images to A Server

    - by bob
    Hello, I have a general question about this. When you have a gallery, sometimes people need to upload 1000's of images at once. Most likely, it would be done through a .zip file. What is the best way to go about uploading this sort of thing to a server? Many times, servers have timeouts etc. that need to be accounted for. I am wondering what kinds of things I should be looking out for and what is the best way to handle a large amount of images being uploaded. I'm guessing that you would allow a user to upload a zip file (assuming the timeout does not affect you), and this zip file is uploaded to a specific directory; let's assume in this case a directory is created for each user in the system. You would then unzip the directory on the server and scan the user's folder for any directories containing .jpg or .png or .gif files (etc.) and then import them into a table accordingly. I'm guessing labeled by folder name. What kind of server side troubles could I run into? I'm aware that there may be many issues. Even general ideas would be good so I can then research further. Thanks! Also, I would be programming in Ruby on Rails but I think this question applies across any language.


  • Paging with Find using Active Record

    - by Brian Rizzo
    I can't seem to find an answer to this question or a good example of how to accomplish what I am trying to do. I'm sure it's been posted or explained somewhere, but I am having trouble finding the exact solution I need. I am using ActiveRecord in Subsonic 3.0.0.3. When I do something like recordset = VehicleModel.Find(x => x.Model.StartsWith(SearchText)); I get back an IList of VehicleModel objects (or more simply a recordset), this is fine until I return too many records. I also cannot order the returned set of records (my grid will do this fine, but i'm sure it will be too slow if i have too many records). Being that Find is returning an IList there isn't much that I can run directly against this (again I may be overlooking something simple so please don't kill me). My question is can someone explain how to find data like i am above, sort it and get a page of data where a page is of size n? Am I going about this wrong? Am I even close to being on the right track?
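    A sketch of one way to do it on the LINQ side of SubSonic 3 (assuming the generated ActiveRecord class exposes the usual All() IQueryable), where the ordering, Skip and Take are pushed into the query instead of being applied to a fully materialized IList:

      // using System.Linq;
      int pageSize = 25;                      // n
      int pageIndex = 0;                      // zero-based page number
      var page = VehicleModel.All()
                             .Where(x => x.Model.StartsWith(SearchText))
                             .OrderBy(x => x.Model)
                             .Skip(pageIndex * pageSize)
                             .Take(pageSize)
                             .ToList();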


  • ModalPopupExtender + ASP.NET AJAX: Can't page grid

    - by Alex
    I'm trying to page and sort my datagrid which is inside a modalpopupextender but I can't page it in any way, already tried with , put the updatepanel inside, outside, in the middle (lol) and it does NOT work. The modal popup does not get closed but the grid just disappears. Code: Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load If Not Page.IsPostBack Then BindData() End If End Sub Private Sub btnSearch_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnSearch.Click SqlServerDS.SelectCommand = "SELECT * FROM emp WHERE name LIKE '%" & txtSearchName.Text & "%'" BindData() End Sub Private Sub BindData() grdSearch.DataSource = SqlServerDS grdSearch.DataBind() End Sub Private Sub grdBuscaPaciente_PageIndexChanging(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewPageEventArgs) Handles grdSearch.PageIndexChanging grdSearch.PageIndex = e.NewPageIndex BindData() End Sub Inside the Designer, this is the code: <modalpopupextender> </modalpopupextender> <panel> <updatepanel> <gridview> </gridview> </updatepanel> </panel>
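    A sketch of the usual fix (assuming SqlServerDS is a SqlDataSource and the extender is named ModalPopupExtender1, both assumptions): paging triggers a postback, so the popup has to be re-shown from the server after rebinding, and the search filter has to be re-applied so BindData does not rebind with the default query.

      Private Sub grdBuscaPaciente_PageIndexChanging(ByVal sender As Object, _
              ByVal e As System.Web.UI.WebControls.GridViewPageEventArgs) _
              Handles grdSearch.PageIndexChanging
          ' re-apply the same filter the popup was opened with (parameterized to avoid SQL injection)
          SqlServerDS.SelectCommand = "SELECT * FROM emp WHERE name LIKE @name"
          SqlServerDS.SelectParameters.Clear()
          SqlServerDS.SelectParameters.Add("name", "%" & txtSearchName.Text & "%")
          grdSearch.PageIndex = e.NewPageIndex
          BindData()
          ModalPopupExtender1.Show()   ' keep the popup open across the postback
      End Sub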


  • What is the correct date format for a Date column in YUI DataTable ?

    - by giulio
    I have produced a data table. All the columns are sortable. It has a date in one column which I formatted as dd/mm/yyyy hh:mm:ss. This is different from the default format as defined in the doco, but I should be able to define my own format for non-American formats. (See below) The DataTable class provides a set of built-in static functions to format certain well-known types of data. In your Column definition, if you set a Column's formatter to YAHOO.widget.DataTable.formatDate, that function will render data of type Date with the default syntax of "MM/DD/YYYY". If you would like to bypass a built-in formatter in favor of your own, you can point a Column's formatter to a custom function that you define. The table is generated from HTML Markup, so the data is held within "" tags. This gives me some more clues about compatible string dates for javascript: In general, the RecordSet expects to hold data in native JavaScript types. For instance, a date is expected to be a JavaScript Date instance, not a string like "4/26/2005" in order to sort properly. Converting data types as data comes into your RecordSet is enabled through the parser property in the fields array of your DataSource's responseSchema I suspect that I'm missing something in the date format. So what is an acceptable string date for JavaScript that the YUI DataTable will recognise, given that I want to format it as "dd/mm/yyyy hh:mm:ss"?
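    A sketch of the two pieces the quoted docs point at (treat it as an outline against the YUI 2 docs rather than verified API): a parser in the DataSource's responseSchema fields that turns the "dd/mm/yyyy hh:mm:ss" string into a real Date so sorting is chronological, and a custom column formatter that prints it back out in the same non-US format.

      // parse "dd/mm/yyyy hh:mm:ss" into a real Date so the column sorts chronologically
      var parseUkDate = function (sDate) {
          var m = /^(\d{2})\/(\d{2})\/(\d{4}) (\d{2}):(\d{2}):(\d{2})$/.exec(sDate);
          return m ? new Date(+m[3], +m[2] - 1, +m[1], +m[4], +m[5], +m[6]) : null;
      };
      // responseSchema: { fields: [ { key: "created", parser: parseUkDate }, ... ] }

      // custom formatter: render the Date back in dd/mm/yyyy hh:mm:ss
      var formatUkDate = function (elCell, oRecord, oColumn, oData) {
          var pad = function (n) { return n < 10 ? "0" + n : n; };
          elCell.innerHTML = pad(oData.getDate()) + "/" + pad(oData.getMonth() + 1) + "/" + oData.getFullYear()
              + " " + pad(oData.getHours()) + ":" + pad(oData.getMinutes()) + ":" + pad(oData.getSeconds());
      };
      // column definition: { key: "created", sortable: true, formatter: formatUkDate }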


  • How to use XPath to filter elements by TextContent? get parent by axis?

    - by Michael Mao
    Hi all: I've found a similar question on SO, however, that seems not exactly what I wanna achieve: Say, this is a sample XML file: <root> <item> <id isInStock="true">10001</id> <category>Loose Balloon</category> </item> <item> <id isInStock="true">10001</id> <category>Bouquet Balloon</category> </item> <item> <id isInStock="true">10001</id> <category>Loose Balloon</category> </item> </root> If I wanna get a "filtered" subset of the item elements from this XML, how could I use an XPath expression to directly address that? XPathExpression expr = xpath.compile("/root/item/category/text()"); I now know this would evaluate to be the collection of all the TextContent from the categories, however, that means I have to use a collection to store the values, then iterate, then go back to grab other related info such as the item id again. Another question is : how could I refer to the parent node properly? Say, this xpath expression would get me the collection of all the id nodes, right? But what I want is the collection of item nodes: XPathExpression expr = xpath.compile("/root/item/id[@isInStock='true']"); I know I should use the "parent" axis to refer to that, but I just cannot make it right... Is there a better way of doing this sort of thing? Learning the w3cschools tutorials now... Sorry I am new to XPath in Java, and thanks a lot in advance.
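    A sketch of both answers in Java (assuming doc is the parsed org.w3c.dom.Document): put the predicate on item itself so the expression returns the item elements directly, which means no parent axis is needed; the explicit parent-axis spelling is shown for completeness.

      // imports assumed: javax.xml.xpath.*, org.w3c.dom.NodeList
      XPath xpath = XPathFactory.newInstance().newXPath();

      // all <item> elements whose category text is "Loose Balloon"
      NodeList loose = (NodeList) xpath.evaluate(
              "/root/item[category = 'Loose Balloon']", doc, XPathConstants.NODESET);

      // all <item> elements whose <id> is in stock - predicate on the child, result is the item
      NodeList inStock = (NodeList) xpath.evaluate(
              "/root/item[id/@isInStock = 'true']", doc, XPathConstants.NODESET);

      // equivalent, navigating back up explicitly:
      //   /root/item/id[@isInStock='true']/parent::item   (or .../..)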


  • crawl websites out of java web application without using bin/nutch

    - by Marcel
    Hi :) I am trying to use Nutch (1.1) without bin/nutch from my (Java) Mojarra 2.0.2 webapp... I have searched Google for examples, but there are no examples of how I can do this :/ ... I get an exception and the job fails :/ (I think it is something with Hadoop)... here is my code: public void run() throws Exception { final String[] args = new String[] { String.format("%s%s%s%s", JSFUtils.getWebAppRoot(), "nutch", File.separator, DIRECTORY_URLS), "-dir", String.format("%s%s%s%s", JSFUtils.getWebAppRoot(), "nutch", File.separator, DIRECTORY_CRAWL), "-threads", this.preferences.get("threads"), "-depth", this.preferences.get("depth"), "-topN", this.preferences.get("topN"), "-solr", this.preferences.get("solr") }; Crawl.main(args); } and a part of the logging: 10/05/17 10:42:54 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId= 10/05/17 10:42:54 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 10/05/17 10:42:54 INFO mapred.FileInputFormat: Total input paths to process : 1 10/05/17 10:42:54 INFO mapred.JobClient: Running job: job_local_0001 10/05/17 10:42:54 INFO mapred.FileInputFormat: Total input paths to process : 1 10/05/17 10:42:55 INFO mapred.MapTask: numReduceTasks: 1 10/05/17 10:42:55 INFO mapred.MapTask: io.sort.mb = 100 java.io.IOException: Job failed! at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232) at org.apache.nutch.crawl.Injector.inject(Injector.java:211) at org.apache.nutch.crawl.Crawl.main(Crawl.java:124) at lan.localhost.process.NutchCrawling.run(NutchCrawling.java:108) at lan.localhost.main.Index.indexing(Index.java:71) at lan.localhost.bean.FeedingBean.actionStart(FeedingBean.java:25) .... Can someone help me or tell me how I can crawl from a Java application? I have increased Xms to 256m and Xmx to 768m, but nothing changed... best regards, Marcel


  • Error building Visual Studio 2010 Silverlight 4 projects on Windows 7 with XP Mode

    - by Kevin Dente
    I installed Visual Studio 2010 Beta 2 in an XP Mode VM on Windows 7. Then I created a trivial Silverlight 4 (beta) project and tried to build it. I get the following error: Error 1 The "ValidateXaml" task failed unexpectedly. System.IO.FileLoadException: Could not load file or assembly 'file://\tsclient\d\Users\me\Documents\Visual Studio 2010\Projects\SilverlightApplication2\SilverlightApplication2\obj\Debug\SilverlightApplication2.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515) File name: 'file://\tsclient\d\Users\me\Documents\Visual Studio 2010\Projects\SilverlightApplication2\SilverlightApplication2\obj\Debug\SilverlightApplication2.dll' --- System.NotSupportedException: An attempt was made to load an assembly from a network location which would have caused the assembly to be sandboxed in previous versions of the .NET Framework. This release of the .NET Framework does not enable CAS policy by default, so this load may be dangerous. If this load is not intended to sandbox the assembly, please enable the loadFromRemoteSources switch. See http://go.microsoft.com/fwlink/?LinkId=155569 for more information. at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoadFrom(String assemblyFile, Evidence securityEvidence, Byte[] hashValue, AssemblyHashAlgorithm hashAlgorithm, Boolean forIntrospection, Boolean suppressSecurityChecks, StackCrawlMark& stackMark) at System.Reflection.Assembly.LoadFrom(String assemblyFile) at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute(ITask task) at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute(ITask task) at Microsoft.Silverlight.Build.Tasks.ValidateXaml.Execute() at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult) I believe this is related to the fact that XP Mode redirects the My Documents folder to the host, turning it into a network share location, and some sort of CAS / security policy is being triggered. Anyone know how to fix it?
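    The error text itself names the workaround: the loadFromRemoteSources switch. A sketch of the config fragment; for builds run from Visual Studio it would typically go into devenv.exe.config next to devenv.exe (treat the target file as an assumption), and the simpler alternative is to move the project off the \\tsclient share onto a drive inside the VM so the assemblies are no longer loaded from a network location.

      <configuration>
        <runtime>
          <!-- allow assemblies loaded from network shares to run with full trust (.NET 4) -->
          <loadFromRemoteSources enabled="true" />
        </runtime>
      </configuration>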

