Search Results

Search found 97400 results on 3896 pages for 'application data'.


  • Should I move big data blobs in JSON or in separate binary connection?

    - by Amagrammer
    QUESTION: Is it better to send large data blobs in JSON for simplicity, or send them as binary data over a separate connection? If the former, can you offer tips on how to optimize the JSON to minimize size? If the latter, is it worth it to logically connect the JSON data to the binary data using an identifier that appears in both, e.g., as "data" : "<unique identifier>" in the JSON and with the first bytes of the data blob being <unique identifier>?

    CONTEXT: My iPhone application needs to receive JSON data over the 3G network. This means that I need to think seriously about efficiency of data transfer, as well as the load on the CPU. Most of the data transfers will be relatively small packets of text data for which JSON is a natural format and for which there is no point in worrying much about efficiency. However, some of the most critical transfers will be big blobs of binary data -- definitely at least 100 kilobytes of data, and possibly closer to 1 megabyte as customers accumulate a longer history with the product. (Note: I will be caching what I can on the iPhone itself, but the data still has to be transferred at least once.) It is NOT streaming data. I will probably use a third-party JSON SDK -- the one I am using during development is here. Thanks
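
    A minimal sketch of the second option, purely illustrative (the 16-byte id, the length prefix and the framing are assumptions, not part of the question): the JSON carries only a small identifier, and the binary payload sent over the other connection starts with the same identifier so the receiver can pair the two transfers. Keeping the blob out of the JSON also avoids base64, which would add roughly a third to the transfer size plus decode work on the phone.

        import json, os, struct

        def make_pair(blob: bytes):
            """Return (json_text, framed_blob) linked by a shared identifier."""
            blob_id = os.urandom(16)                      # assumed 16-byte id scheme
            meta = json.dumps({"data": blob_id.hex()})    # small JSON, no base64 blow-up
            framed = blob_id + struct.pack(">I", len(blob)) + blob
            return meta, framed

        def read_pair(meta_text: str, framed: bytes) -> bytes:
            blob_id = bytes.fromhex(json.loads(meta_text)["data"])
            assert framed[:16] == blob_id                 # pair the two transfers
            (length,) = struct.unpack(">I", framed[16:20])
            return framed[20:20 + length]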


  • Moving an ASP.NET application to the cloud

    - by user102533
    I am new to cloud computing, so please bear with me here. I have an existing ASP.NET application with SQL Server 2008 hosted on a Virtual Private Server. Here's what it briefly does:

    1. The front end accepts users' requests and adds them to a DB table.
    2. A Windows Service running in the background picks up the request, processes it and sets a flag. The Windows Service also creates a file for the user to download.
    3. The user downloads the file.

    I'd like to move this web application with the service to the cloud. The architecture I envision is that I'll have one web server on which I will install the front end and the Windows Service. I'll also have a cloud file server for file storage. The Windows Service should somehow create a file and transfer it to the cloud file server (I assume this is possible?).

    My questions:

    - Does the architecture look like I am going in the right direction?
    - I know Amazon has been providing cloud services for a long time. If I want to make minimal changes to my application, should I go with Amazon, Rackspace, Azure or some other provider?
    - I understand that I would not only pay for file storage and the web server but also for the bandwidth of users downloading the file and the Windows Service uploading the file to the cloud server. Can I assume these costs are negligible?
    - Should I go with a VPS + cloud files combination to begin with?
    - Any other thoughts/suggestions?


  • VB.Net Application Settings / ClickOnce

    - by B Z
    VS 2008 / VB.Net / WinForms. I have an application setting (Settings.settings) for a project and I am using ClickOnce deployment. I used the VS editor to create the setting and I can see the setting in the app.config file:

        <applicationSettings>
          <MyApp.Win.My.MySettings>
            <setting name="MySetting" serializeAs="String">
              <value>False</value>
            </setting>
          </MyApp.ArtTracker.Win.My.MySettings>
        </applicationSettings>

    I would like to update this setting after the application is compiled. The setting is for testing purposes only. If I change the xxx.config.deploy and I reinstall the app with ClickOnce, the new setting value doesn't change (it seems to be cached somewhere). Even if I change the setting on my local PC, it seems to be cached somewhere. If I go into VS it asks me to re-sync the settings. But I need to do this after the application is compiled. Thanks for any help


  • Sweave/R - Automatically generating an appendix that contains all the model summaries/plots/data profiles

    - by John Horton
    I like the idea of making research available at multiple levels of detail, i.e., abstract for the casually curious, full text for the more interested, and finally the data and code for those working in the same area/trying to reproduce your results. In between the actual text and the data/code level, I'd like to insert another layer. Namely, I'd like to create a kind of automatically generated appendix that contains the full regression output, diagnostic plots, exploratory graphs, data profiles etc. from the analysis, regardless of whether those plots/regressions etc. made it into the final paper.

    One idea I had was to write a script that would examine the .Rnw file and automatically:

    - Profile all data sets that are loaded (sort of like the Hmisc(?) package)
    - Summarize all regressions - i.e., run summary(model) for all models
    - Present all plots (regardless of whether they made it into the final version)

    The idea is to make this kind of a low-effort, push-button sort of thing as opposed to a formal appendix written like the rest of a paper. What I'm looking for is some ideas on how to do this in R in a relatively simple way. My hunch is that there is some way of going through the namespace, figuring out what something is and then dumping it into a PDF. Thoughts? Does something like this already exist?
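
    A rough sketch of the "go through the namespace" idea (plain R rather than Sweave, and the choice of which classes to summarise is an assumption): after the document's chunks have been evaluated, walk the workspace, print a summary for anything that is a fitted model or a data frame, and sink it all to an appendix file. Plots would still need to be captured separately, e.g. by replaying recorded plots into a pdf() device.

        appendix <- function(file = "appendix.txt") {
          sink(file)
          on.exit(sink())                                 # always restore output
          for (nm in ls(envir = .GlobalEnv)) {
            obj <- get(nm, envir = .GlobalEnv)
            if (inherits(obj, c("lm", "glm"))) {          # fitted regressions
              cat("\n== Model:", nm, "==\n")
              print(summary(obj))
            } else if (is.data.frame(obj)) {              # loaded data sets
              cat("\n== Data set:", nm, "==\n")
              print(summary(obj))
            }
          }
        }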


  • Sending jQuery.ajax data simultaneous to a form submit

    - by dscher
    I have a bit of a conundrum. I have a form which has numerous fields. There is one field for links where you enter a link, click an add button, and the link (using jQuery) gets added to a link_array. I want this array to be sent via the jQuery.ajax method when the form is submitted. If I send the link_array using $.ajax like this:

        $.ajax({
            type: "POST",
            url: "add_stock",
            dataType: "json",
            data: { "links": link_array }
        });

    when the add link button is selected, the data goes no problem to the correct place and gets put in the db correctly. If I bind the above function to the submit form button using $('#stock_form').submit(.....) then the rest of the form data is sent but not the link_array. I can obviously pass the link array back into a hidden field in HTML, but then I'd have to unpack the array into comma-separated values and break the comma-separated string apart in PHP. It just seems 100X easier to unpack the Javascript array in PHP without any other fuss. So, how is it that you can send an array from javascript using $.ajax concurrent with the rest of the $_POST data in HTML? Please note that I'm using the Kohana 3.0 framework, but really that shouldn't make a difference; what I want to do is add this js array to the $_POST array that is already going. Thanks!
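
    One way to send both in a single POST (a sketch assuming jQuery 1.4+ and the names used above): serialize the form and append the array with $.param, whose default encoding produces links[]=... pairs that PHP exposes as an array in $_POST['links'].

        $('#stock_form').submit(function (e) {
            e.preventDefault();                       // stop the normal submit
            $.ajax({
                type: 'POST',
                url: 'add_stock',
                dataType: 'json',
                // form fields plus the client-side array in one request body
                data: $(this).serialize() + '&' + $.param({ links: link_array })
            });
        });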


  • CLR Stored Procedures

    - by Paul Hatcherian
    In an ASP.NET application, I have a small number of fairly complex, frequently used operations to execute against a database. In these operations, one or more of several tables needs updates or inserts based on a logical evaluation of both input parameters and values of certain tables. I've maintained a separation of logic and data access, so the operation currently looks like this:

    1. Request received from client
    2. Business layer invokes data layer to retrieve data from database
    3. Business layer processes result and determines which operation to execute
    4. Business layer invokes appropriate data operation
    5. Response sent to client

    As you can see, the client is kept waiting while two separate requests are made to the database. In searching for a solution to this, I've found CLR stored procedures, but I'm not sure if I have the right idea about what they are useful for. I have written a replacement for the code above which essentially places steps 2-4 in a CLR SP. My understanding is that the SP will be executed locally by SQL Server and result in only one call being made to the server. My initial benchmark tests show this is actually orders of magnitude slower than my original code, but I attribute that to recompilation of the code I have not worked out yet and/or some flaw in my environment. My question is basically: is this the intended use of CLR SPs, or am I missing something? I realize this is a bit of a compromise structurally, so if there's a better way to do it I'd love to hear it.
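
    For reference, a bare-bones sketch of what such a CLR stored procedure looks like (the procedure name and parameter are made up): the detail that keeps steps 2-4 on the server is the in-process "context connection=true" connection string, so no extra client round trips are made.

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public partial class StoredProcedures
        {
            [SqlProcedure]
            public static void ProcessRequest(int requestId)   // hypothetical name
            {
                using (var conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    // step 2: read the rows the decision depends on
                    // step 3: evaluate them in C#
                    // step 4: run the appropriate UPDATE/INSERT on the same connection
                }
            }
        }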


  • How to group data changes by operation with MySQL triggers

    - by Jan-Henk
    I am using triggers in MySQL to log changes to the data. These changes are recorded on a row level. I can now insert an entry in my log table for each row that is changed. However, I also need to record the operation to which the changes belong. For example, a delete operation like "DELETE * FROM table WHERE type=x" can delete multiple rows. With the trigger I can insert an entry for each deleted row into the log table, but I would like to also provide a unique identifier for the operation as a whole, so that the log table looks something like:

        log_id  operation_id  tablename  fieldname  oldvalue  newvalue
        1       1             table      id         1         null
        2       1             table      type       a         null
        3       1             table      id         2         null
        4       1             table      type       a         null
        5       2             table      id         3         null
        6       2             table      type       b         null
        7       2             table      id         4         null
        8       2             table      type       b         null

    Is there a way in MySQL to identify the higher-level operation to which the row changes belong? Or is this only possible by means of application-level code? In the future it would also be nice to be able to record the transaction to which an operation belongs. Another question is whether it is possible to capture the actual SQL query, besides using the query log. I don't think so myself, but maybe I am missing something. It is of course possible to capture these at the application level, but the goal is to keep intrusions into the application-level code as minimal as possible. When this is not possible with MySQL, how does this work with other database systems? For the current project it is not an option to use something other than MySQL, but it would be nice to know for future projects.
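
    One pattern that is often used for this (a sketch with made-up table and column names, not a built-in MySQL feature): the application sets a session variable once per logical operation and the trigger copies it into every log row, so all rows touched by the same statement share an operation_id.

        -- application side, once per logical operation:
        SET @operation_id = UUID();   -- or an integer drawn from a sequence table
        DELETE FROM mytable WHERE type = 'x';

        -- trigger side: every affected row is logged with the same id
        CREATE TRIGGER mytable_after_delete AFTER DELETE ON mytable
        FOR EACH ROW
            INSERT INTO log_table (operation_id, tablename, fieldname, oldvalue, newvalue)
            VALUES (@operation_id, 'mytable', 'type', OLD.type, NULL);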


  • Map element position in data file to class property

    - by Augusto
    I need to read/write files, following a format provided by a third-party specification. The specification itself is pretty simple: it says the position and the size of the data that will be saved in the file. For example:

        Position  Size  Description
        --------------------------------------------------
        0001      10    Device serial number
        0011      02    Hour
        0013      02    Minute
        0015      02    Second
        0017      02    Day
        0019      02    Month
        0021      02    Year

    The list is very long, it has about 400 elements. But lots of them can be combined. For example, hour, minute, second, day, month and year can be combined in a single DateTime object. I've split the elements into about 4 categories, and created separate classes for holding the data. So, instead of a big structure representing the data, I have some smaller classes. I've also created different classes for reading and writing the data.

    The problem is: how to map the positions in the file to the object properties, so that I don't need to repeat the values in the reading/writing classes? I could use some custom attributes and retrieve them via reflection. But since the code will be running on devices with small memory and processor, it would be nice to find another way. My current read code looks like this:

        public void Read()
        {
            DataFile dataFile = new DataFile();
            // the arguments are: position, size
            dataFile.SerialNumber = ReadLong(1, 10);
            //...
        }

    Any ideas on this one? Thanks!
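
    One reflection-free alternative (a sketch; the DataFile shape, the helper delegates and the padding rules are assumptions): declare the layout once as a static table of position/size entries with small parse/format delegates, and let both the reader and the writer loop over that same table.

        using System;

        class DataFile
        {
            public long SerialNumber;
            public int Hour;
        }

        class Field
        {
            public int Position;                      // 1-based, as in the spec
            public int Size;
            public Action<DataFile, string> Parse;    // raw text -> property
            public Func<DataFile, string> Format;     // property -> raw text
        }

        static class FileLayout
        {
            public static readonly Field[] Fields =
            {
                new Field { Position = 1,  Size = 10,
                            Parse  = (d, s) => d.SerialNumber = long.Parse(s),
                            Format = d => d.SerialNumber.ToString().PadLeft(10, '0') },
                new Field { Position = 11, Size = 2,
                            Parse  = (d, s) => d.Hour = int.Parse(s),
                            Format = d => d.Hour.ToString("00") },
                // ... one entry per element of the specification
            };
        }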


  • Properly using Log4r in Ruby Application

    - by Spencer
    I must really be missing something obvious, but I'm having trouble with the general use of Log4r in my Ruby application. I am able to log without issue, but the overhead seems clunky the way I have it set up. I'm basically passing the full path to the log file into each class in my application. The Ruby script that is called pulls the log file path from one of the arguments in ARGV, which is then passed around and set in each class that I call. In each class I use the PatternFormatter to insert the class/file name into the log statement.

    Is there a better way to make this work? It feels like anything I can think of will require something to be passed to each class in my Ruby application. I could set the log file in a YAML configuration file instead, but then I would be passing around the configuration file to each class as well. Any advice? If this doesn't make sense I could try and post some more specific code samples to further explain what I mean. Thanks!


  • Automation Error when exporting Excel data to SQL Server

    - by brohjoe
    I'm getting an Automation error upon running VBA code in Excel 2007. I'm attempting to connect to a remote SQL Server DB and load data from Excel to SQL Server. The error I get is, "Run-time error '-2147217843 (80040e4d)': Automation error". I checked out the MSDN site and it suggested that this may be due to a bug associated with the sqloledb provider and one way to mitigate this is to use ODBC. Well, I changed the connection string to reflect the ODBC provider and associated parameters and I'm still getting the same error. Here is the code with ODBC as the provider:

        Dim cnt As ADODB.Connection
        Dim rst As ADODB.Recordset
        Dim stSQL As String
        Dim wbBook As Workbook
        Dim wsSheet As Worksheet
        Dim rnStart As Range

        Public Sub loadData()
            'This was set up using Microsoft ActiveX Data Components version 6.0.
            'Create ADODB connection object, open connection and construct the connection string object.
            Set cnt = New ADODB.Connection
            cnt.ConnectionString = "Driver={SQL Server}; Server=onlineSQLServer2010.foo.com; Database=fooDB;Uid=logonalready;Pwd='helpmeOB1';"
            cnt.Open
            On Error GoTo ErrorHandler

            'Open Excel and run query to export data to SQL Server.
            strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0'," & _
                "'Data Source=C:\Database.xlsx; Extended Properties=Excel 12.0')...[SalesOrders$]"
            cnt.Execute (strSQL)

            'Error handling.
        ErrorExit:
            'Reclaim memory from the connection objects
            Set rst = Nothing
            Set cnt = Nothing
            Exit Sub

        ErrorHandler:
            MsgBox Err.Description, vbCritical
            Resume ErrorExit

            'clean up and reclaim memory resources.
            If CBool(cnt.State And adStateOpen) Then
                Set rst = Nothing
                Set cnt = Nothing
                cnt.Close
            End If
        End Sub


  • Struggling with a data modeling problem

    - by rpat
    I am struggling with a data model (I use MySQL for the database). I am uneasy about what I have come up with. If someone could suggest a better approach, or point me to some reference material, I would appreciate it.

    The data would have organizations of many types. I am trying to do a 3-level classification (Class, Category, Type). Say if I have 'Italian Restaurant', it will have the following classification: Food Services > Restaurants > Italian. However, an organization may belong to multiple groups. A restaurant may also serve Chinese and Italian. So it will fit into 2 classifications: Food Services > Restaurants > Italian and Food Services > Restaurants > Chinese.

    The classification reference tables would be like the following:

        ORG_CLASS (RowId, ClassCode, ClassName)
        1, FOOD, Food Services

        ORG_CATEGORY (RowId, ClassCode, CategoryCode, CategoryName)
        1, FOOD, REST, Restaurants

        ORG_TYPE (RowId, ClassCode, CategoryCode, TypeCode, TypeName)
        100, FOOD, REST, ITAL, Italian
        101, FOOD, REST, CHIN, Chinese
        102, FOOD, REST, SPAN, Spanish
        103, FOOD, REST, MEXI, Mexican
        104, FOOD, REST, FREN, French
        105, FOOD, REST, MIDL, Middle Eastern

    The actual data tables would be like the following. I will allow an organization a max of 3 classifications. I will have 3 GroupIds, each pointing to a row in ORG_TYPE. So I have my ORGANIZATION_TABLE:

        ORGANIZATION_TABLE (OrgGroupId1, OrgGroupId2, OrgGroupId3, OrgName, OrgAddress)
        100, 103, NULL, MyRestaurant1, MyAddr1
        100, 102, NULL, MyRestaurant2, MyAddr2
        100, 104, 105,  MyRestaurant3, MyAddr3

    During data add, a dialog could let the user choose the class, category and type, and the corresponding GroupId could be populated with the RowId from the ORG_TYPE table. During search, if all three classification levels are chosen, it will be more specific. For example, if Food Services > Restaurants > Italian is the criteria, the where clause would be 'where OrgGroupId1 = 100'. If only 2 levels are chosen (Food Services > Restaurants), I have to do 'where OrgGroupId1 in (100, 101, 102, 103, 104, 105, .....)' - there could be a hundred in that list. I will disallow class-level search; that is, I will force selection of a class and category. The Ids would be integers.

    I am trying to see performance issues and other issues. Overall, would this work? Or do I need to throw this out and start from scratch?
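
    For comparison, a sketch of the usual many-to-many alternative (hypothetical table names, reusing the ORG_TYPE table from the post): a junction table removes the fixed OrgGroupId1..3 columns, and a two-level search becomes a join on ClassCode/CategoryCode instead of a long IN list.

        CREATE TABLE ORGANIZATION (
            OrgId      INT PRIMARY KEY,
            OrgName    VARCHAR(100),
            OrgAddress VARCHAR(200)
        );

        -- one row per classification an organization holds, however many that is
        CREATE TABLE ORGANIZATION_TYPE (
            OrgId     INT,
            TypeRowId INT,            -- references ORG_TYPE.RowId
            PRIMARY KEY (OrgId, TypeRowId)
        );

        -- "Food Services > Restaurants" without enumerating every type id:
        SELECT o.OrgName
        FROM ORGANIZATION o
        JOIN ORGANIZATION_TYPE ot ON ot.OrgId = o.OrgId
        JOIN ORG_TYPE t ON t.RowId = ot.TypeRowId
        WHERE t.ClassCode = 'FOOD' AND t.CategoryCode = 'REST';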


  • How to access this data in PHP?

    - by George Edison
    Okay. Now I give up. I have been playing with this for hours. I have a variable named $data. The variable contains these contents:

        array (
          'headers' => array (
            'content-type' => 'multipart/alternative; boundary="_689e1a7d-7a0a-442a-bd6c-a1fb1dc2993e_"',
          ),
          'ctype_parameters' => array (
            'boundary' => '_689e1a7d-7a0a-442a-bd6c-a1fb1dc2993e_',
          ),
          'parts' => array (
            0 => stdClass::__set_state(array(
              'headers' => array (
                'content-type' => 'text/plain; charset="iso-8859-1"',
                'content-transfer-encoding' => 'quoted-printable',
              ),
              'ctype_primary' => 'text',
            )),
          ),
        )

    I removed some non-essential data. I want to access the headers value (on the second line above) - simple: $data->headers. I want to access the headers value (further down, after the stdClass:: stuff) - how? How can I possibly access the values within the stdClass::__set_state section? I tried var_export($data->parts); but all I get is NULL
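
    For reference, a short sketch of the access paths implied by that dump (it assumes the structure shown above): the outer value is an array, so it is indexed with [], and -> is only used once you reach the stdClass objects inside 'parts'. That is also why $data->headers and var_export($data->parts) come back as NULL.

        <?php
        $contentType = $data['headers']['content-type'];   // outer level: array syntax
        $firstPart   = $data['parts'][0];                   // this one is a stdClass object
        $partHeaders = $firstPart->headers;                 // an array again, inside the object
        echo $partHeaders['content-transfer-encoding'];     // "quoted-printable"
        echo $firstPart->ctype_primary;                     // "text"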


  • Aggregating, restructuring hourly time series data in R

    - by Advait Godbole
    I have a year's worth of hourly data in a data frame in R:

        > str(df.MHwind_load)   # compactly displays structure of data frame
        'data.frame':  8760 obs. of  6 variables:
         $ Date         : Factor w/ 365 levels "2010-04-01","2010-04-02",..: 1 1 1 1 1 1 1 1 1 1 ...
         $ Time..HRs.   : int  1 2 3 4 5 6 7 8 9 10 ...
         $ Hour.of.Year : int  1 2 3 4 5 6 7 8 9 10 ...
         $ Wind.MW      : int  375 492 483 476 486 512 421 396 456 453 ...
         $ MSEDCL.Demand: int  13293 13140 12806 12891 13113 13802 14186 14104 14117 14462 ...
         $ Net.Load     : int  12918 12648 12323 12415 12627 13290 13765 13708 13661 14009 ...

    While preserving the hourly structure, I would like to know how to extract:

    - a particular month / group of months
    - the first day / first week etc. of each month
    - all Mondays, all Tuesdays etc. of the year

    I have tried using "cut" without result and, after looking online, think that "lubridate" might be able to do so, but I haven't found suitable examples. I'd greatly appreciate help on this issue.
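
    A base-R sketch (no lubridate; it assumes Date holds the dates shown, Time..HRs. runs 1-24 as in the str() output, and an English locale for month/day names): build a POSIXct timestamp once, then subset with months(), weekdays() and format().

        df <- df.MHwind_load
        df$stamp <- as.POSIXct(paste(df$Date, df$Time..HRs. - 1),
                               format = "%Y-%m-%d %H", tz = "UTC")

        april   <- df[months(df$stamp) == "April", ]                # one month
        mondays <- df[weekdays(df$stamp) == "Monday", ]             # all Mondays
        firstwk <- df[as.integer(format(df$stamp, "%d")) <= 7, ]    # first 7 days of every month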


  • Socket Read In Multi-Threaded Application Returns Zero Bytes or EINTR (104)

    - by user309670
    Hi. Am a c-coder for a while now - neither a newbie nor an expert. Now, I have a certain daemoned application in C on a PPC Linux. I use PHP's socket_connect as a client to connect to this service locally. The server uses epoll for multiplexing connections via a Unix socket. A user-submitted string is parsed for certain characters/words using strstr() and, if found, spawns 4 joinable threads to different websites simultaneously. I use socket, connect, write and read to interact with the said webservers via TCP on their port 80 in each thread. All connections and writes seem successful. Reads from the webserver sockets fail, however, with either:

    (A) all 3 threads seem to hang, and only one thread returns -1 and errno is set to 104. The responding thread takes like 10 minutes - an eternity :-(. I read somewhere that the 104 (is EINTR?), which in the network context suggests that ...'the connection was reset by peer'; or

    (B) 0 bytes from 3 threads, and only 1 of the 4 threads actually returns some data.

    Isn't the socket read/write thread-safe? I use thread-safe (and reentrant) libc functions such as strtok_r, gethostbyname_r, etc. I doubt that the said webhosts are actually resetting the connection, because when I run a single-threaded standalone (everything else equal) all things work perfectly right, but of course in series, not parallel.

    There's a second problem too (oops): I can't write back to the client who connects to my epoll-ed Unix socket. My daemon application will hang and hog the CPU at 100% forever. Yet nothing is written to the client's end. Am sure the client (a very typical PHP socket application) hasn't closed the connection whenever this is happening - no error(s) detected either.

    Any ideas? I cannot figure out whatever is wrong, even with Valgrind, GDB or much logging. Kindly help where you can.
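
    One note that may help: on Linux, errno 104 is ECONNRESET rather than EINTR (EINTR is 4). Either way, a read loop that retries on EINTR and tolerates short reads is cheap insurance in a threaded client; a minimal sketch, assuming each thread uses only its own descriptor and buffer:

        /* Retry on EINTR, tolerate short reads; stop at EOF or a real error. */
        #include <errno.h>
        #include <unistd.h>

        ssize_t read_all(int fd, char *buf, size_t len)
        {
            size_t done = 0;
            while (done < len) {
                ssize_t n = read(fd, buf + done, len - done);
                if (n == 0)                 /* peer closed the connection */
                    break;
                if (n < 0) {
                    if (errno == EINTR)     /* interrupted by a signal: retry */
                        continue;
                    return -1;              /* real error, e.g. ECONNRESET (104) */
                }
                done += (size_t)n;
            }
            return (ssize_t)done;
        }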


  • SQLite/Fluent NHibernate integration test harness initialization not repeatable after large data set

    - by Mark Rogers
    In one of my main data integration test harnesses I create and use Fluent NHibernate's SingleConnectionSessionSourceForSQLiteInMemoryTesting to get a fresh session for each test. After each test, I close the connection, session, and session factory, and throw out the nested StructureMap container they came from. This works for almost any simple data integration test I can think of, including ones that utilize Fluent NHib's PersistenceSpecification object.

    When I test the application's lengthy database bootstrapping process, which creates and saves thousands of domain objects, I start seeing issues. It's not that the setup and tear down fails; in fact, the test successfully bootstraps the in-memory database as the application would bootstrap the real database in the production environment. The problem occurs when the database bootstrapping occurs a second time on a new in-memory database, with a new session and session factory. The error is:

        NHibernate.StaleStateException : Unexpected row count: 0; expected: 1

    The row count is indeed unexpected; the row that the application under test is looking for should be in the session. You see, it's not that any data from the last integration test is sticking around, it's that for some reason the session just stops working mid-database-bootstrap. And I've looked everywhere for a place I might be holding on to an old session and I can't find one. I've searched through the code for static singleton objects, but there are none anywhere near the code in question. I have a couple of StructureMap InstanceScope singletons, but they are getting thrown out with each nested container that is lost after every test teardown.

    I've tried every possible variation on disposing and closing every object involved with each test teardown and it still fails on this lengthy database bootstrap. But non-bootstrap-related database tests appear to work fine. I'm starting to run out of options and may have to surrender lengthy database integration tests in favor of WatiN-based acceptance tests.

    Can anyone give me any clue about how I can figure out why some of my SingleConnectionSessionSourceForSQLiteInMemoryTesting tests aren't repeatable? Any advice at all about how to make an NHibernate SQLite database integration test harness repeatable?


  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
    Hello, I have a relatively complex query, with several self joins, which works on a rather large table. For that query to perform faster, I thus need to only work with a subset of the data. Said subset can range between 12,000 and 120,000 rows depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow

    As you can see, I was using a CTE to return the data subset before, which caused some performance problems, as SQL Server was re-running the SELECT statement in the CTE for every join instead of simply running it once and reusing its data set. The alternative, using temporary tables, worked much faster (while testing the query in a separate window outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables for some reason...

    UDFs do allow table variables, however, so I tried that, but the performance is absolutely horrible as it takes 1m40 for my query to complete whereas the CTE version only took 40 minutes. I believe the table variables are slow for reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure

    The temporary table version takes around 1 second, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller. Considering that CTEs and table variables are both too slow, and that temporary tables are rejected in UDFs, what are my options in order for my UDF to perform quickly? Thanks a lot in advance.
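
    If the caller could accept a stored procedure instead of a function, one option worth weighing (a sketch with hypothetical object names): a procedure may load the subset into a #temp table and still hand the joined rows back as a result set.

        CREATE PROCEDURE dbo.GetSubsetJoined @param1 int
        AS
        BEGIN
            -- materialise the 12k-120k row subset once
            SELECT col1, col2
            INTO #subset
            FROM dbo.BigTable
            WHERE filter_col = @param1;

            -- the self-joins now run against the small temp table
            SELECT a.col1, b.col2
            FROM #subset AS a
            JOIN #subset AS b ON b.col1 = a.col1;
        END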


  • Passing data to a non-static listBox, by calling function from another class

    - by Andrew A
    I have a simple forms program that I have been fighting with for a while now. I simply want to be able to call a method from a different class file (when a certain step is triggered in the code in that class file) in order to insert a string in the listBox. Here is my main method, pretty standard:

        class Program
        {
            [STAThread]
            static void Main(string[] args)
            {
                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Application.Run(new MainForm());
            }
        }

    Here is the function which resides in my MainForm.cs file, which I can call just fine from that class file (via 'TextToBox(myString);'):

        public partial class MainForm : Form
        {
            ...
            // Function to output results to main Listbox window
            public void TextToBox(string aString)
            {
                // Place messages in Main Display list box window
                this.listBox1.Items.Insert(0, aString);
            }
            ...
        }

    But my problem is when I am in another class and I want to call 'TextToBox(myString);'. If I create another object reference of the MainForm, the code compiles fine but nothing will show up in the listBox. How do I do this? I cannot simply make TextToBox() static. I know I must create the object reference but I can't figure out how to reference the ORIGINAL MainForm, the one that was created in the Main method. Thanks in advance...
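
    One common way out (a sketch; the Worker class and its members are hypothetical): instead of constructing a second MainForm, hand the other class a reference to, or just a delegate from, the form that Main created.

        using System;

        class Worker
        {
            private readonly Action<string> report;

            // MainForm passes in its own TextToBox method
            public Worker(Action<string> report)
            {
                this.report = report;
            }

            public void DoStep()
            {
                report("step finished");   // ends up in listBox1 of the real form
            }
        }

        // inside MainForm, e.g. in its constructor or a button handler:
        //     var worker = new Worker(TextToBox);
        //     worker.DoStep();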


  • How to access application.xml file of an EAR deployed to IBM WebSphere 6.1

    - by Matt1776
    I am deploying an EAR file to the IBM WebSphere server 6.1 - I want to be able to access the EAR application name which is stored in the deployment file under 'display-name'. Looking through Stack Overflow posts on related subjects, I've been able to gather that this is possible via the Java MBean API - or IBM's WAS API - the problem is I cannot find a place where these API lists are summarized, i.e. I cannot figure out which one to begin looking at. I could hardcode the WAS install location and find the file by looking in the 'installedApps' directory, but this is not dynamic. Does anyone have any experience working with these APIs? Any other way to dynamically find the deployed EAR's display name?

    EDIT - I should add that the reason I would like this information is to dynamically load our properties files, which are named by the following convention: "EARAppName.properties" - so you see there IS a reasonable 'rationale' behind desiring this information in my application.

    EDIT 2 - I should also note that this app will always be deployed on a WAS - but in the case that it isn't, a generic non-proprietary solution would be preferred, though not necessary at this moment.

    EDIT 3 - What I want to accomplish: Is there a way to dynamically find the deployed EAR's display name from within the application code?


  • al32utf8 in oracle and SQL Server and DB2 pulling data

    - by Bob
    I have a non-UTF8 Oracle database running on 11.1.0.7. We need to support Greek characters, so we have two options:

    - Use nvarchar/nclob fields for those fields that need Greek (it is not all fields). We have tested this and gotten it to work with Java coding.
    - Convert Oracle to an AL32UTF8 database. I am not asking how to do this; I got this from the Oracle site/Oracle Support. I know what is involved: lossy data, etc., and increasing the size of the database.

    My question is: we have users of our system that connect to our database with database links but work on SQL Server and IBM DB2 databases. I do not have access to those databases and I do not have experience with them. If they are not UTF-8 databases, what happens when they pull UTF8 data? I would assume that English/ASCII characters are fine and the Greek will end up as junk data.

    I also ran the Oracle Character Set Scanner (the Oracle command-line utility you use to get info about the effects of a character set conversion). It says that my database will increase in size by about 20%. Does this have an effect on users with 3rd-party databases? These are customers of our data and there is a limit to how much access I can have to them to run tests. Any information you have would be welcome.


  • Approach for replacing forms authentication in .NET application

    - by Ash Machine
    My question is about an approach, and I am looking for tips or links to help me develop a solution. I have a .NET 4.0 Web Forms application that works with Forms authentication using the aspnetdb SQL database of users and passwords. A new feature for the application is a new authentication mechanism using single sign-on to allow access for thousands of new users. Essentially, when the user logs in through the new single-sign-on method, I will be able to identify them as legitimate users with a role. So I will have something like HttpContext.Current.Session["email_of_authenticated_user"] (their identity) and HttpContext.Current.Session["role_of_authenticated_user"] (their role).

    Importantly, I don't necessarily want to maintain these users and roles redundantly in the aspnetdb database, which will be retired, but I do want to use the session objects above to allow the user to pass through the application as if they were passing through with Forms authentication. I don't think custom RoleProviders or MembershipProviders are helpful since they do not allow for creating session-level users.

    So my question is how to use the session-level user and role that I do have to "mimic" all the Forms authentication goodness, like enforcing:

        [System.Security.Permissions.PrincipalPermission(System.Security.Permissions.SecurityAction.Demand, Role = "Student")]

    or

        <authorization>
          <allow users="wilma, barney" />
        </authorization>

    Thanks for any pointers.
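
    One approach (a sketch, not a drop-in: the session keys are the ones named above, and the choice of pipeline event is an assumption): once the single-sign-on values are in session, attach a GenericPrincipal carrying that identity and role to the request. Code-level checks such as PrincipalPermission demands and User.IsInRole then work unchanged; the declarative <authorization> element is evaluated earlier in the pipeline, so that part may still need a ticket- or cookie-based principal instead of session.

        using System;
        using System.Security.Principal;
        using System.Threading;
        using System.Web;

        public class Global : HttpApplication
        {
            // Runs after session state has been acquired for the request.
            protected void Application_PostAcquireRequestState(object sender, EventArgs e)
            {
                var ctx = HttpContext.Current;
                if (ctx.Session == null) return;
                if (ctx.User != null && ctx.User.Identity.IsAuthenticated) return;

                var email = ctx.Session["email_of_authenticated_user"] as string;
                var role  = ctx.Session["role_of_authenticated_user"] as string;
                if (email == null || role == null) return;

                // Later role checks in this request see the single-sign-on identity.
                ctx.User = new GenericPrincipal(new GenericIdentity(email), new[] { role });
                Thread.CurrentPrincipal = ctx.User;
            }
        }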


  • modifying ajax returned data before showing

    - by Nick
    I'm processing a subscription form with jQuery/ajax and need to display results with the success function (they are different depending on whether the email exists in the database). But the trick is I do not need the h2 and the first "p" tag. How can I show only div#alertmsg and the second "p" tag? I've tried removing unnecessary elements with the method described here, but it didn't work. Thanks in advance. Here is my code:

        var dataString = $('#subscribe').serialize();
        $.ajax({
            type: "POST",
            url: "/templates/willrock/pommo/user/process.php",
            data: dataString,
            success: function(data){
                var result = $(data);
                $('#success').append(result);
            }

    Here is the data returned:

        <h2>Subscription Review</h2>
        <p><a href="url" onClick="history.back(); return false;"><img src="/templates/willrock/pommo/themes/shared/images/icons/back.png" alt="back icon" class="navimage" /> Back to Subscription Form</a></p>
        <div id="alertmsg" class="error">
          <ul>
            <li>Email address already exists. Duplicates are not allowed.</li>
          </ul>
        </div>
        <p><a href="login.php">Update your records</a></p>
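
    A small sketch of one way to keep only the wanted pieces (assuming the markup shown above): parse the response into a detached wrapper first, then append just #alertmsg and the second paragraph.

        success: function (data) {
            var result = $('<div>').append(data);        // parse the returned HTML off-DOM
            $('#success')
                .append(result.find('#alertmsg'))        // keep the alert box
                .append(result.find('p').eq(1));         // and only the second <p>
        }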


  • Scripts to parse and download iTunes Connect and AppStore data

    - by bradhouse
    I'm looking for recommendations of a script or series of scripts that download and parse iTunes Connect sales data and App Store comments, ratings and rankings data for a defined app. I'm also aware of solutions like:

    - AppViz
    - appsales-mobile
    - iphone-stats
    - Heartbeat.app

    I'm sure I'll find a few more with more searching. I can't help but feel there must be a really decent set of open source scripts out there to do this, given how many developers are now writing apps for the App Store. Would be interested to hear about any commercial offerings as well (although my personal preference is for open source, so I can at least see what it is doing with my iTunes Connect login credentials).

    To be clear, I'm really looking for something that hits all of the areas mentioned:

    - App Store (per store): comments, ratings, category/store rankings
    - iTunes Connect: the contents of the sales reports

    Analysis/graphs of the data are not necessary (but would be a nice-to-have I guess). I'm not really looking for something like AppSales Mobile above; I would like the raw data so I can do my own analysis and formatting. So far it looks like AppViz (listed above) is the best out there. Any suggestions on what is good/available, or should I just go roll my own?


  • resource-ref at application scope in EJB 2.1 Project

    - by Mike Deck
    Is it possible to define resource references that are applicable to all EJBs in an application? Currently I have an ejb-jar.xml that looks something like this:

        <ejb-jar>
          <enterprise-beans>
            <session id="foo">
              <!-- snip -->
              <resource-ref>
                <res-ref-name>jdbc/myDatasource</res-ref-name>
                <res-type>javax.sql.DataSource</res-type>
                <res-auth>Container</res-auth>
              </resource-ref>
            </session>
            <session id="bar">
              <!-- snip -->
              <resource-ref>
                <res-ref-name>jdbc/myDatasource</res-ref-name>
                <res-type>javax.sql.DataSource</res-type>
                <res-auth>Container</res-auth>
              </resource-ref>
            </session>
          </enterprise-beans>
        </ejb-jar>

    You'll notice that both EJBs have the same resource-ref defined for both of them. Is there a way to factor this duplication out within a J2EE 1.4 application? Ideally I should be able to define the jdbc/myDatasource resource once within the application and have anything running inside that container be able to access it by doing a JNDI lookup for "java:comp/env/jdbc/myDatasource". Is there any way to accomplish this?


  • In what order does DOJO handles ajax requests?

    - by mm2887
    Hi, using a form in a dialog box, I am using Dojo in JSP to save the form in my database. After that request is completed using dojo.xhrPost(), I am sending in another request to update a dropdown box that should include the added form object that was just saved, but for some reason the request to update the dropdown is executed before the form is saved in the database, even though the form save is called first. Using Firebug I can see that the getLocations() request is completed before the sendForm() request. This is the code:

        Add Team
        sendForm("ajaxResult1", "addTeamForm");
        dijit.byId("addTeamDialog").hide();
        getLocations("locationsTeam");
        </script>

        function sendForm(target, form){
            var targetNode = dojo.byId(target);
            var xhrArgs = {
                form: form,
                url: "ajax",
                handleAs: "text",
                load: function(data) {
                    targetNode.innerHTML = data;
                },
                error: function(error) {
                    targetNode.innerHTML = "An unexpected error occurred: " + error;
                }
            }
            //Call the asynchronous xhrGet
            var deferred = dojo.xhrPost(xhrArgs);
        }

        function getLocations(id) {
            var targetNode = dojo.byId(id);
            var xhrArgs = {
                url: "ajax",
                handleAs: "text",
                content: { location: "yes" },
                load: function(data) {
                    targetNode.innerHTML = data;
                },
                error: function(error) {
                    targetNode.innerHTML = "An unexpected error occurred: " + error;
                }
            }
            //Call the asynchronous xhrGet
            var deferred = dojo.xhrGet(xhrArgs);
        }

    Why is this happening? Is there a way to make the first request complete before the second executes? To reduce the possibilities of why this is happening, I tried setting the cache property in xhrGet to false, but the result is still the same. Please help!
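
    The two xhr calls are both asynchronous, so they race. One way to serialise them (a sketch assuming a pre-1.8 Dojo, where the xhr functions return a dojo.Deferred): have sendForm return the Deferred it already creates, and start getLocations from its callback instead of on the next line.

        // inside sendForm, return the Deferred it already creates:
        //     return dojo.xhrPost(xhrArgs);
        var deferred = sendForm("ajaxResult1", "addTeamForm");
        deferred.addCallback(function () {
            dijit.byId("addTeamDialog").hide();
            getLocations("locationsTeam");   // runs only after the POST has finished
        });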


  • How to rename an existing Grails application

    - by Johan Pelgrim
    Hi there, does anybody know how to (easily) "rename" an existing Grails application? I'm running into this because my PaaS provider does not allow me to delete a subscription... so I want to deploy my application under a different name. Of course, I can do this manually, but I think it might be a useful top-level script (i.e. "grails rename-app newappname").

    Manual hints: when I do a "grails create-app myappname" I can see that myappname exists in the following files (and filenames)... Of course this is done by the create-app script, which replaces @...@ tokens in the template. I guess once they are replaced, it's not trivial to do a rename.

        ./.project: <name>myappname</name>
        ./application.properties:app.name=myappname
        ./build.xml:<project xmlns:ivy="antlib:org.apache.ivy.ant" name="myappname" default="test">
        ./ivy.xml: <info organisation="org.example" module="myappname"/>
        ./myappname-test.launch:<stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="myappname"/>
        ./myappname.launch:<listEntry value="/myappname"/>
        ./myappname.launch:<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry containerPath="org.eclipse.jdt.launching.JRE_CONTAINER" javaProject="myappname" path="1" type="4"/> "/>
        ./myappname.launch:<stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="myappname"/>
        ./myappname.launch:<stringAttribute key="org.eclipse.jdt.launching.VM_ARGUMENTS" value="-Dbase.dir="${project_loc:myappname}" -Dserver.port=8080 -Dgrails.env=development"/>
        ./myappname.tmproj: <string>myappname.launch</string>

    And of course... the top-level directory name is "myappname". Any hints, or information about ongoing initiatives in this area, are welcome. Greetz, Johan

