Search Results

Search found 62141 results on 2486 pages for 'alternate data stream'.


  • Site-to-site VPN problem: connection successful, but data only flows one way?

    - by Charles
    To start things off, I'm not the actual administrator of the VPN server, but he is also at a loss, so I thought I'd ask here. I know it's a Cisco ASA firewall/VPN. I have a router that connects to the Cisco VPN server, and it does so successfully. I can ping everything within the remote network and from the remote network into my own. I've been able to SSH into a remote server over the VPN as well; it all seems to work until some more data is returned. A quick example would be an internal webserver. The default homepage simply redirects, so it only sends back HTTP headers with a "Location:". I receive this on my computer, but when I then request the actual page (which isn't that big) I don't get a response at all - it just stalls. It does this for other services as well, for example SSH: I can do a couple of things while connected, but if there's more than xx output it seems to do nothing. The connection remains active throughout all of this. Has anyone ever experienced anything like this before, or know what the problem might be? Another user who has a site-to-site connection with this VPN using the -exact same setup- has no problems; the only difference is that I have around 200ms ping to the VPN server/network because of a very long distance (other continent).

    Read the article

  • XmlDataProvider and XPath bindings don't allow default namespace of XML data?

    - by Andy Dent
    I am struggling to work out how to use default namespaces with XmlDataProvider and XPath bindings. There's an ugly answer using local-name <Binding XPath="*[local-name()='Name']" /> but that is not acceptable to the client who wants this XAML to be highly maintainable. The fallback is to force them to use non-default namespaces in the report XML but that is an undesirable solution. The XML report file looks like the following. It will only work if I remove xmlns="http://www.acme.com/xml/schemas/report so there is no default namespace. <?xml version="1.0" encoding="utf-8"?> <?xml-stylesheet type='text/xsl' href='PreviewReportImages.xsl'?> <Report xsl:schemaLocation="http://www.acme.com/xml/schemas/report BlahReport.xsd" xmlns:xsl="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.acme.com/xml/schemas/report"> <Service>Muncher</Service> <Analysis> <Date>27 Apr 2010</Date> <Time>0:09</Time> <Authoriser>Service Centre Manager</Authoriser> Which I am presenting in a window with XAML: <Window x:Class="AcmeTest.ReportPreview" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="ReportPreview" Height="300" Width="300" > <Window.Resources> <XmlDataProvider x:Key="Data"/> </Window.Resources> <StackPanel Orientation="Vertical" DataContext="{Binding Source={StaticResource Data}, XPath=Report}"> <TextBlock Text="{Binding XPath=Service}"/> </StackPanel> </Window> with code-behind used to load an XmlDocument into the XmlDataProvider (seems the only way to have loading from a file or object varying at runtime). public partial class ReportPreview : Window { private void InitXmlProvider(XmlDocument doc) { XmlDataProvider xd = (XmlDataProvider)Resources["Data"]; xd.Document = doc; } public ReportPreview(XmlDocument doc) { InitializeComponent(); InitXmlProvider(doc); } public ReportPreview(String reportPath) { InitializeComponent(); var doc = new XmlDocument(); doc.Load(reportPath); InitXmlProvider(doc); } }
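
    One commonly suggested workaround, sketched below as an assumption rather than a confirmed fix for this particular report, is to map the default namespace to a prefix on the XmlDataProvider itself and then use that prefix in every XPath binding. The prefix "r" is arbitrary; everything else follows the post above:

      <Window.Resources>
          <XmlDataProvider x:Key="Data">
              <XmlDataProvider.XmlNamespaceManager>
                  <XmlNamespaceMappingCollection>
                      <!-- "r" stands in for the document's default namespace -->
                      <XmlNamespaceMapping Prefix="r" Uri="http://www.acme.com/xml/schemas/report" />
                  </XmlNamespaceMappingCollection>
              </XmlDataProvider.XmlNamespaceManager>
          </XmlDataProvider>
      </Window.Resources>
      <StackPanel Orientation="Vertical"
                  DataContext="{Binding Source={StaticResource Data}, XPath=r:Report}">
          <TextBlock Text="{Binding XPath=r:Service}"/>
      </StackPanel>

    The code-behind that assigns the Document would stay the same; only the XPath expressions gain the prefix.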

    Read the article

  • How do I persist form data across an "access denied" page in Drupal?

    - by Michael T. Smith
    We're building a small sub-site that, on the front page, has a one-input-box form that users can submit. From there, they're taken to a page with a few more associated form fields (add more details, tag it, etc.) with the first main form field already filled in. This works splendidly, thus far. The problem comes for users that are not logged in. When they submit that first form, they're taken to a (LoginToboggan based) login page that allows them to log in. After they log in, they're redirected to the second form page, but the first main form field isn't filled in -- in other words, the form data didn't persist. How can we store that data and have it persist across the "access denied" page?

    Read the article

  • Any suggestions on data synchronization from a server to a desktop?

    - by Jamey McElveen
    I have a web based CRM system that stores all the client data from all the clients into one database (MS Sql Server). We need to build a system that maintains a local copy of the data for each client. So basically the client database will have all tables and columns except for the ClientId that is in every server table. I am aware that I will need to add fields to the server to support synchronization. Are there any good solutions or components already out there to help me accomplish this? We are using MS Sql Server and .NET C#.
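
    The usual starting point for this kind of sync is a change-tracking column on every server table, which matches the "I will need to add fields" remark above. A minimal T-SQL sketch with hypothetical table and column names (deletes would additionally need a tombstone table, omitted here):

      -- rowversion is bumped automatically on every insert/update, so each client
      -- only has to remember the highest value it has already pulled down.
      ALTER TABLE dbo.Customers ADD RowVer rowversion;

      -- Pull everything for this client that changed since its last sync anchor.
      SELECT CustomerId, Name, Phone, RowVer
      FROM dbo.Customers
      WHERE ClientId = @ClientId
        AND RowVer > @LastSyncAnchor;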

    Read the article

  • what is the fastest way to copy all data to a new larger hard drive?

    - by SUPER user
    I was certain this would have been covered before, but I cannot find an answer amongst all the almost-duplicates that come up; sorry if I've missed something obvious. I have a full 320gb disk inside my machine, a new 1tb disk to replace it, and a USB 2.0 chassis. It is only data on a single partition, no OS/apps involved, and the old drive will be kept somewhere as backup (no secure wiping etc). The simple option would be to put new disk in USB chassis, copy files, then swap them over. But for USB pen drives, reading is around 4x faster than writing. If the same is true for a USB SATA chassis (is it?) then it would be significantly faster to swap the drives first and read from the old drive over USB, right? Then the other consideration is that copying lots of files is usually slower than a single file of equivalent size. Is Windows 7 smart enough to do everything in a single lump like that, or is there specialised software that should be used instead? (Even if SATA-SATA copying is faster than involving USB, knowing what to do when it isn't an option is useful information.) Summary: Does a USB SATA chassis suffer from a read/write inequality? (like a USB pen drive does, but unlike a direct SATA connection) Can Windows 7 do sequential access? (I can't find confirmation if Robocopy does this.) Or is it necessary to use a bootable CD/USB with something like Clonezilla to achieve sequential copy speeds?
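
    If the file-by-file route is acceptable, Robocopy (built into Windows 7) is the usual tool; note it still copies file by file rather than sequentially imaging the disk, so it does not answer the sequential-copy part of the question. A hedged example, with the drive letters as placeholders:

      robocopy F:\ D:\ /E /COPYALL /DCOPY:T /R:1 /W:1 /MT:8 /LOG:C:\copy-log.txt

    /E copies the full directory tree (including empty folders), /COPYALL and /DCOPY:T preserve attributes and timestamps, /R:1 /W:1 keeps it from stalling on problem files, and /MT:8 runs several copy threads in parallel.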

    Read the article

  • Why isn't "String or Binary data would be truncated" a more descriptive error?

    - by rwmnau
    To start: I understand what this error means - I'm not attempting to resolve an instance of it. This error is notoriously difficult to troubleshoot, because if you get it inserting a million rows into a table 100 columns wide, there's virtually no way to determine what column of what row is causing the error - you have to modify your process to insert one row at a time, and then see which one fails. That's a pain, to put it mildly. Is there any reason that the error doesn't look more like this? String or Binary data would be truncated Error inserting value "Some 18 char value" into SomeTable.SomeColumn VARCHAR(10) That would make it a lot easier to find and correct the value, if not the table structure itself. If seeing the table data is a security concern, then maybe something generic, like giving the length of the attempted value and the name of the failing column?
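
    Until the error itself gets more descriptive, one workable approach (a sketch with hypothetical staging/target names) is to compare the declared length of each character column in the target table against the longest value the source is about to insert:

      -- Declared lengths of every character column in the target table.
      SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
      FROM INFORMATION_SCHEMA.COLUMNS
      WHERE TABLE_NAME = 'SomeTable'
        AND CHARACTER_MAXIMUM_LENGTH IS NOT NULL
      ORDER BY ORDINAL_POSITION;

      -- Then, for each suspect column, check the longest incoming value.
      SELECT MAX(LEN(SomeColumn)) AS longest_source_value
      FROM dbo.StagingTable;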

    Read the article

  • How to finish a broken data upload to the production Google App Engine server?

    - by WooYek
    I was uploading data to App Engine (not the dev server) through a loader class and the remote API, and I hit the quota in the middle of a CSV file. Based on the logs and the progress sqlite db, how can I select the remaining portion of data to be uploaded? Going through tens of records to determine which were and which were not transferred is not an appealing task, so I'm looking for some way to limit the number of records I need to check. Here's the relevant (IMO) log portion - how do I interpret the work item numbers?
      [DEBUG 2010-03-30 03:22:51,757 bulkloader.py] [Thread-2] [1041-1050] Transferred 10 entities in 3.9 seconds
      [DEBUG 2010-03-30 03:22:51,757 adaptive_thread_pool.py] [Thread-2] Got work item [1071-1080]
      <cut>
      [DEBUG 2010-03-30 03:23:09,194 bulkloader.py] [Thread-1] [1141-1150] Transferred 10 entities in 4.6 seconds
      [DEBUG 2010-03-30 03:23:09,194 adaptive_thread_pool.py] [Thread-1] Got work item [1161-1170]
      <cut>
      [DEBUG 2010-03-30 03:23:09,226 bulkloader.py] [Thread-3] [1151-1160] Transferred 10 entities in 4.2 seconds
      [DEBUG 2010-03-30 03:23:09,226 adaptive_thread_pool.py] [Thread-3] Got work item [1171-1180]
      [ERROR 2010-03-30 03:23:10,174 bulkloader.py] Retrying on non-fatal HTTP error: 503 Service Unavailable

    Read the article

  • How to write a cctor and op= for a factory class with ptr to abstract member field?

    - by Kache4
    I'm extracting files from zip and rar archives into raw buffers. I created the following to wrap minizip and unrarlib: Archive.hpp #include "ArchiveBase.hpp" #include "ArchiveDerived.hpp" class Archive { public: Archive(string path) { /* logic here to determine type */ switch(type) { case RAR: archive_ = new ArchiveRar(path); break; case ZIP: archive_ = new ArchiveZip(path); break; case UNKNOWN_ARCHIVE: throw; break; } } Archive(Archive& other) { archive_ = // how do I copy an abstract class? } ~Archive() { delete archive_; } void passThrough(ArchiveBase::Data& data) { archive_->passThrough(data); } Archive& operator = (Archive& other) { if (this == &other) return *this; ArchiveBase* newArchive = // can't instantiate.... delete archive_; archive_ = newArchive; return *this; } private: ArchiveBase* archive_; } ArchiveBase.hpp class ArchiveBase { public: // Is there any way to put this struct in Archive instead, // so that outside classes instantiating one could use // Archive::Data instead of ArchiveBase::Data? struct Data { int field; }; virtual void passThrough(Data& data) = 0; /* more methods */ } ArchiveDerived.hpp "Derived" being "Zip" or "Rar" #include "ArchiveBase.hpp" class ArchiveDerived : public ArchiveBase { public: ArchiveDerived(string path); void passThrough(ArchiveBase::Data& data); private: /* fields needed by minizip/unrarlib */ // example zip: unzFile zipFile_; // example rar: RARHANDLE rarFile_; } ArchiveDerived.cpp #include "ArchiveDerived.hpp" ArchiveDerived::ArchiveDerived(string path) { //implement } ArchiveDerived::passThrough(ArchiveBase::Data& data) { //implement } Somebody had suggested I use this design so that I could do: Archive archiveFile(pathToZipOrRar); archiveFile.passThrough(extractParams); // yay polymorphism! How do I write a cctor for Archive? What about op= for Archive? What can I do about "renaming" ArchiveBase::Data to Archive::Data? (Both minizip and unrarlib use such structs for input and output. Data is generic for Zip & Rar and later is used to create the respective library's struct.)
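
    The usual answer to the copy-constructor/assignment half of this question is a virtual clone() on the abstract base, so the wrapper never needs to know which concrete archive it is copying. A minimal sketch (not the poster's code, stripped down to the copying concern):

      class ArchiveBase {
      public:
          virtual ~ArchiveBase() {}
          virtual ArchiveBase* clone() const = 0;   // each subclass duplicates itself
      };

      class ArchiveZip : public ArchiveBase {
      public:
          ArchiveZip* clone() const { return new ArchiveZip(*this); }  // covariant return type
      };

      class Archive {
      public:
          explicit Archive(ArchiveBase* a = 0) : archive_(a) {}

          Archive(const Archive& other)
              : archive_(other.archive_ ? other.archive_->clone() : 0) {}

          Archive& operator=(const Archive& other) {
              if (this != &other) {
                  ArchiveBase* copy = other.archive_ ? other.archive_->clone() : 0;
                  delete archive_;          // safe: the copy was made before deleting
                  archive_ = copy;
              }
              return *this;
          }

          ~Archive() { delete archive_; }

      private:
          ArchiveBase* archive_;
      };

    For the ArchiveBase::Data naming question, a typedef ArchiveBase::Data Data; inside Archive lets callers write Archive::Data without moving the struct.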

    Read the article

  • Playing repeated sound in Java

    - by Diogo Schneider
    I'm trying to play sounds in a Java game with the following code: AudioStream audioStream = new AudioStream(stream); AudioPlayer.player.start(audioStream); The stream variable is just an InputStream to the resource. The first time this code is called, the sound is played as expected, but the second time the program just hangs - not even an exception is thrown. I don't know what's going on or how to prevent this. If I try closing either stream or audioStream after the above code, the program doesn't hang, but no sound is ever played at all. Any tips are welcome, thanks.
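
    A hedged alternative sketch: AudioStream and AudioPlayer live in the undocumented sun.audio package, so one common route is the public javax.sound.sampled API instead. The resource name below is a placeholder:

      import javax.sound.sampled.AudioInputStream;
      import javax.sound.sampled.AudioSystem;
      import javax.sound.sampled.Clip;

      public class SoundPlayer {
          public static void play(java.net.URL soundUrl) throws Exception {
              AudioInputStream in = AudioSystem.getAudioInputStream(soundUrl);
              Clip clip = AudioSystem.getClip();
              clip.open(in);        // loads the (short) sound into memory
              clip.start();         // returns immediately; playback is asynchronous
          }

          public static void main(String[] args) throws Exception {
              play(SoundPlayer.class.getResource("/hit.wav"));   // placeholder resource
          }
      }

    For repeated plays the same Clip can be reused with clip.setFramePosition(0); clip.start(); rather than reopening the stream each time.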

    Read the article

  • Bitmap to Texture2D problem with colors

    - by xnaNewbie89
    I have a small problem with converting a bitmap to a Texture2D. The resulting image of the conversion has the red channel switched with the blue channel :/ I don't know why, because the pixel formats are the same. If someone can help me I will be very happy :) System.Drawing.Image image = System.Drawing.Bitmap.FromFile(ImageFileLoader.filename); System.Drawing.Bitmap bitmap = new System.Drawing.Bitmap(image); Texture2D mapTexture = new Texture2D(Screen.Game.GraphicsDevice, bitmap.Width, bitmap.Height,false,SurfaceFormat.Color); System.Drawing.Imaging.BitmapData data = bitmap.LockBits(new System.Drawing.Rectangle( 0, 0, bitmap.Width, bitmap.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly,System.Drawing.Imaging.PixelFormat.Format32bppArgb); byte[] bytes = new byte[data.Height * data.Width*4]; System.Runtime.InteropServices.Marshal.Copy(data.Scan0, bytes, 0, bytes.Length); mapTexture.SetData<byte>(bytes, 0, data.Height * data.Width * 4); bitmap.UnlockBits(data); bitmap.Dispose(); image.Dispose();
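
    A hedged fix sketch: GDI+ stores Format32bppArgb pixels as B,G,R,A bytes in memory, while SurfaceFormat.Color expects R,G,B,A, so swapping the first and third byte of every pixel before SetData should put the channels back in order (variable names follow the snippet above):

      for (int i = 0; i < bytes.Length; i += 4)
      {
          byte b = bytes[i];         // blue, as GDI+ laid it out
          bytes[i] = bytes[i + 2];   // move red into the first slot
          bytes[i + 2] = b;          // and blue into the third
      }
      mapTexture.SetData<byte>(bytes, 0, bytes.Length);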

    Read the article

  • Is there a way to pass a custom type from C# to Oracle using System.Data.OracleClient?

    - by Frankie Simon
    I was searching online for a solution to passing a string array to a stored procedure in Oracle. The "easy" but messy way was using a comma delimited string. I had found a sample based on which I created my own type of TABLE OF VARCHAR2(200). I understood that I could use a constructor created "behind-the-scenes" by Oracle to give a list of values that would, in the PL/SQL, be treated as a TABLE I could iterate through. But when I got to the C# side I saw that there is no way for me to create an OracleParameter object that would allow me to use this implicit constructor. All the samples I'm finding online so far deal with the Oracle Data Adapter; none say anything about System.Data.OracleClient. Is there a way to achieve this?

    Read the article

  • Is it possible to find records that form a straight run in MySQL, based on date or sequence?

    - by user1987816
    I want to get the data that have a straight run of 'Sell' three or more times in a row. Is that possible in MySQL? If not, how do I get it right? I need it in MySQL or PHP. My database:
      +----------+---------------------+--------+
      | Username | Date                | Action |
      +----------+---------------------+--------+
      | Adam     | 2014-08-20 22:30:20 | Sell   |
      | Adam     | 2014-08-20 22:30:20 | Sell   |
      | Adam     | 2014-08-20 22:30:20 | Sell   |
      | Adam     | 2014-08-20 22:30:20 | Buy    |
      | Adam     | 2014-08-20 22:30:20 | Buy    |
      | Adam     | 2014-08-20 22:30:20 | Sell   |
      | Adam     | 2014-08-20 22:30:20 | Sell   |
      | Adam     | 2014-08-20 22:30:20 | Sell   |
      | Adam     | 2014-08-20 22:30:20 | Sell   |
      | Nick     | 2014-08-20 22:30:20 | Sell   |
      | Nick     | 2014-08-20 22:30:20 | Sell   |
      | Nick     | 2014-08-20 22:30:20 | Sell   |
      | Nick     | 2014-08-20 22:30:20 | Sell   |
      | Nick     | 2014-08-20 22:30:20 | Buy    |
      +----------+---------------------+--------+
    From the table above, I need to list all the data that have a straight sell of three or more.
      RESULT
      +----------+---------------------+--------+-------------+
      | Username | Date                | Action | Straight 3+ |
      +----------+---------------------+--------+-------------+
      | Adam     | 2014-08-20 22:30:20 | Sell   | 3           |
      | Adam     | 2014-08-20 22:30:20 | Sell   | 4           |
      | Nick     | 2014-08-20 22:30:20 | Sell   | 4           |
      +----------+---------------------+--------+-------------+
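
    A hedged sketch of the classic user-variable approach (the table is assumed to be called trades with an id sequence column, since the sample dates are all identical, and the evaluation order of user variables inside a SELECT is not formally guaranteed, so treat this as a starting point rather than a definitive query). It returns every row once the running streak reaches 3, so a streak of 4 produces rows numbered 3 and 4; collapsing each streak to a single summary row would need one more grouping pass:

      SELECT username, date, action, streak AS `Straight 3+`
      FROM (
          SELECT t.username, t.date, t.action,
                 @streak := IF(t.action = 'Sell' AND t.username = @prev_user,
                               @streak + 1,
                               IF(t.action = 'Sell', 1, 0)) AS streak,
                 @prev_user := t.username AS prev_user
          FROM trades AS t
          CROSS JOIN (SELECT @streak := 0, @prev_user := NULL) AS vars
          ORDER BY t.username, t.id
      ) AS runs
      WHERE streak >= 3;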

    Read the article

  • How can I store data in a table as a trie? (SQL Server)

    - by Matt
    Hi, To make things easier, the table contains all the words in the English dictionary. What I would like to do is be able to store the data as a trie. This way I can traverse the different branches of the trie and return the most relevant result. First, how do I store the data in the table as a trie? Second, how do I traverse the tree? If it helps at all, the suggestion in this previous question is where this question was sparked from. Please make sure it's SQL we're talking about. I understood Mike Dunlavey's C implementation because of the pointers, but can't see how this part (the trie itself) works in SQL. Thanks, Matt
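
    One common relational encoding of a trie is an adjacency list with one row per node, walked with a recursive CTE. A hedged T-SQL sketch with hypothetical names (the prefix 'cat' is just an example):

      CREATE TABLE dbo.TrieNode (
          NodeId   int IDENTITY(1,1) PRIMARY KEY,
          ParentId int NULL REFERENCES dbo.TrieNode(NodeId),  -- NULL marks a root-level letter
          Letter   char(1) NOT NULL,
          IsWord   bit NOT NULL DEFAULT 0                     -- 1 means a word ends at this node
      );

      DECLARE @prefix varchar(50);
      SET @prefix = 'cat';

      WITH Walk AS (
          SELECT n.NodeId, n.Letter, n.IsWord, 1 AS Depth
          FROM dbo.TrieNode AS n
          WHERE n.ParentId IS NULL
            AND n.Letter = SUBSTRING(@prefix, 1, 1)
          UNION ALL
          SELECT c.NodeId, c.Letter, c.IsWord, w.Depth + 1
          FROM Walk AS w
          JOIN dbo.TrieNode AS c
            ON c.ParentId = w.NodeId
           AND c.Letter = SUBSTRING(@prefix, w.Depth + 1, 1)
          WHERE w.Depth < LEN(@prefix)
      )
      SELECT NodeId, Letter, IsWord, Depth
      FROM Walk;

    From the node reached at Depth = LEN(@prefix), the same kind of recursion without the letter filter enumerates every completion under that prefix.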

    Read the article

  • How to move my data from my old MacBook Pro to my new one?

    - by Tim Büthe
    I just purchased a new MacBook Pro and already have a 2008 model. I wonder how to move all my data over to the new one. My first idea was to use my Time Machine backup and restore from it, which seems to be a good idea and should work just fine according to this link: http://blog.duncandavidson.com/2008/01/restoring-from-time-machine.html. But since my current MacBook has older software on it, like iLife '08 instead of iLife '09, I would have to upgrade this afterwards. Is this correct, or does Time Machine do some magic to exclude well-known software? And is it possible to reinstall or upgrade iLife with the included installation DVDs? My second idea is to just swap the hard drives instead of using the Time Machine backup. If it is not too complicated to remove the hard drive, this should be the fastest way. This also has the benefit that the 2008 MacBook then contains a brand new installation, and I don't have to remove all my stuff or reinstall Mac OS before I give it away. My question on that second idea would be: does Snow Leopard handle this correctly - will it just boot with the new hardware and work fine? So in a nutshell: what would you do, restore from backup or swap drives? And what about the new software?

    Read the article

  • Problem adding data to an array in Objective-C

    - by anurag
    I am grappling with adding an NSData object to an NSMutableArray. The code executes fine but it is not adding the objects to the array. The code is as follows: NSData * imageData = UIImageJPEGRepresentation(img, 1.0); int i=0; do{ if([[tempArray objectAtIndex:i] isEqual:imageData]) { [tempArray removeObjectAtIndex:i]; } else { [tempArray addObject:imageData]; //NSLog(@"ANURAG %@",[tempArray objectAtIndex:0]); } }while(i<[tempArray count]) ; The NSLog statement shows that the added object is null, however the value of imageData is not null. I have defined tempArray as a static member of the class in which this code is written. Is it because of the size of the data object, since it is the data of an image?
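
    A hedged sketch of one likely fix (an assumption, since the array's initialisation isn't shown above): messages sent to a nil receiver are silently ignored in Objective-C, so if the static tempArray is never created, addObject: appears to do nothing. Creating it once and using containsObject: also avoids the loop-index bookkeeping:

      static NSMutableArray *tempArray = nil;   // class-wide storage, assumed from the question

      - (void)toggleImageData:(UIImage *)img {
          if (tempArray == nil) {
              tempArray = [[NSMutableArray alloc] init];   // must exist before first use
          }

          NSData *imageData = UIImageJPEGRepresentation(img, 1.0);

          if ([tempArray containsObject:imageData]) {      // isEqual: on NSData compares bytes
              [tempArray removeObject:imageData];
          } else {
              [tempArray addObject:imageData];
          }
          NSLog(@"array now holds %lu objects", (unsigned long)[tempArray count]);
      }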

    Read the article

  • Best way to transfer data from one page to another in ASP.NET

    - by Anish Karunakaran
    In my scenario I am using a JavaScript popup that opens another form, which is an address entry form with about 20 controls on it. At the moment I am retrieving data from the address page back to the main page by using a session variable that stores a data table of values. Two different session variables are used this way, for permanent and temporary addresses. I know that using session variables degrades performance. What is the best way to transfer values from one page to another? Regards, anish.

    Read the article

  • Data structure in c for fast look-up/insertion/removal of integers (from a known finite domain)

    - by MrDatabase
    I'm writing a mobile phone based game in c. I'm interested in a data structure that supports fast (amortized O(1) if possible) insertion, look-up, and removal. The data structure will store integers from the domain [0, n] where n is known ahead of time (it's a constant) and n is relatively small (on the order of 100000). So far I've considered an array of integers where the "ith" bit is set iff the "ith" integer is contained in the set (so a[0] is integers 0 through 31, a[1] is integers 32 through 63 etc). Is there an easier way to do this in c?
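
    A minimal C sketch of the bit-array idea the question already describes (N is the known upper bound; add, remove and look-up are all O(1), and the whole set for n around 100000 fits in roughly 12 KB):

      #include <stdint.h>
      #include <string.h>

      #define N 100000                       /* largest value in the domain [0, N] */

      static uint32_t bits[N / 32 + 1];      /* one bit per possible integer */

      static void set_add(int x)      { bits[x >> 5] |=  (uint32_t)1 << (x & 31); }
      static void set_remove(int x)   { bits[x >> 5] &= ~((uint32_t)1 << (x & 31)); }
      static int  set_contains(int x) { return (bits[x >> 5] >> (x & 31)) & 1; }
      static void set_clear(void)     { memset(bits, 0, sizeof bits); }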

    Read the article

  • Quickbooks integration: IPP/IDS: can these be used for actual data exchange?

    - by Parand
    Poking around options for integrating an online app with Quickbooks, I've made a lot of headway with QBWC, but it's fairly ugly. From an end user perspective the usability of QBWC is pretty low. Intuit is now pushing Intuit Partner Platform (IPP) and Intuit Data Services (IDS). I can't quite figure out what these are about: Is IPP limited to using Flex, or can it work with existing web apps? Are there APIs for actual data exchange? Is it possible to interact with desktop Quickbooks using IPP or IDS? If there is sample code, particularly in Python, some pointers would be great.

    Read the article

  • How can I change the default location/action of 'Open Outlook Data File' in Outlook 2010?

    - by Chadddada
    I have recently deployed a Remote Desktop Host server that functions as a remote Microsoft Office 2010 workspace for users. As part of locking down this server I have installed all programs on the D: drive and, through Group Policy, hidden all the drives on the server from standard users. In addition to hiding these drives I am not allowing users to save anything locally (on the server) or open Libraries. However, one of the functions of the server is to provide the Outlook client. Often users will have their .PST file stored on a network location and want to open this in Outlook. Can I change the default action or location that File > Open > Open Outlook Data File looks in or tries to pull the file from? The default location seems to be under Users / Libraries. When clicking 'Open' you get a warning: "This operation has been cancelled due to restrictions in effect on this computer." Clicking OK drops the user into a small menu that shows attached network drives under Computer. Can I instead have the 'Open' click drop users into a defined network drive, or just open Computer and allow them to select a share? I don't want them to see the error message. A solution that looks to have been used for Office 2000/03 is:
      Key: HKEY_CURRENT_USER\Software\Microsoft\Office\<version>\Outlook
      Value name: ForceOSTPath
      Value type: REG_EXPAND_SZ
      Value: path to your storage folder
    I am not sure if there is a better way to do this now, or if this even works with Office 2010.
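
    For what it's worth, the documented value for .pst locations is ForcePSTPath (ForceOSTPath covers the Exchange offline cache file), and Outlook 2010 reads it from the 14.0 key. A hedged sketch - the UNC path is a placeholder, and the value steers where Outlook creates and looks for data files rather than rewriting the Open dialog itself, so it is worth testing before deploying:

      reg add "HKCU\Software\Microsoft\Office\14.0\Outlook" /v ForcePSTPath /t REG_EXPAND_SZ /d "\\fileserver\outlook-pst" /f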

    Read the article

  • How can I calculate data for a boxplot (quartiles, median) in a Rails app on Heroku? ( Heroku uses P

    - by hadees
    I'm trying to calculate the data needed to generate a box plot, which means I need to figure out the 1st and 3rd quartiles along with the median. I have found some solutions for doing it in PostgreSQL, however they seem to depend on either PL/Python or PL/R, neither of which Heroku seems to have enabled for their PostgreSQL databases. In fact I ran "select lanname from pg_language;" and only got back "internal". I also found some code to do it in pure Ruby, but that seems somewhat inefficient to me. I'm rather new to box plots, PostgreSQL, and Ruby on Rails, so I'm open to suggestions on how I should handle this. There is a possibility of having a lot of data, which is why I'm concerned with performance; however, if the solution ends up being too complex I may just do it in Ruby, and if my application gets big enough to warrant it, get my own PostgreSQL instance I can host somewhere else. *note: since I was only able to post one link, because I'm new, I decided to share a pastie with some relevant information
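
    On current PostgreSQL (9.4 or later) the ordered-set aggregates compute these in plain SQL, with no PL/R or PL/Python needed; older servers would need a window-function or offset-based workaround instead. A hedged sketch with a hypothetical table and column:

      SELECT
          min(value)                                            AS lowest,
          percentile_cont(0.25) WITHIN GROUP (ORDER BY value)   AS q1,
          percentile_cont(0.50) WITHIN GROUP (ORDER BY value)   AS median,
          percentile_cont(0.75) WITHIN GROUP (ORDER BY value)   AS q3,
          max(value)                                            AS highest
      FROM measurements;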

    Read the article

  • XBRL - Moving from Production to Consumption

    - by jmorourke
    Here's an update on what’s new with XBRL and how it can actually benefit your organization versus adding extra time and costs to financial reporting.  On February 29th (leap day) of 2012 I attended the XBRL and Financial Analysis Technology Conference at Baruch College in NYC.  The event, which attracted over 300 XBRL gurus and fans was presented by XBRL US, The New York Society of Security Analysts’ Improved Corporate Reporting Committee, and Baruch College’s Robert Zicklin Center for Corporate Integrity.  The event featured keynotes from the U.S. Securities and Exchange Commission (SEC), and the CFA Institute as well as panels covering alternative research tools and data, corporate reporting to stakeholders and a demonstration of XBRL analysis tools.  The program culminated in a presentation of the finalists and the winner of the $20,000 XBRL Challenge.    Some of the key points made in the sessions included: The focus of XBRL tools is moving from production to consumption. As of February 2012, over 9000 companies are reporting in XBRL, with over 10 million facts filed to date XBRL taxonomy extensions have dropped from 27% to 11% making comparisons easier The SEC reports that XBRL makes it easier to analyze disclosures, focus on accounting issues XBRL is helping standards-setters like the FASB speed their analysis of impacts of proposed accounting rule changes Companies like Thomson Reuters report that XBRL is helping speed the delivery of data to clients The most interesting part of the program though, was the session highlighting the 5 finalists in the XBRL Challenge competition and the winning solution.  The XBRL Challenge was launched in 2011 as a means of spurring the development of more end-user tools to help with the consumption of XBRL-based financial information.       Over an 8-month process handled by 5 judges, there were 84 registrants, 15 completed submissions, 5 finalists and one winner of the challenge.  All of the solutions are open-sourced tools and most of them focus on consuming XBRL-based data.  The 5 finalists included: Advanced XBRL Processing from Oxide solutions – XBRL viewer for taxonomies, filings and company data with peer comparison capabilities. Arrelle – API for XBRL processes, supports SEC Validations, RSS Feeds to access filings etc. Calcbench – XBRL data analysis tool that can be embedded in other web applications.  This tool can combine XBRL filings with real-time market data. XBRL to XL – allows the importing of XBRL data into Microsoft Excel for analysis, comparisons.  Users start on the web and populate Excel with XBRL data. XBurble – allows users to search and view XBRL filings, export to Excel, merge for comparison, and includes a workflow interface. The winner of the $20,000 XBRL Challenge prize was CalcBench.  More information about the XBRL Challenge and the finalists can be found at www.XBRLUS.org/challenge XBRL for Sustainability Reporting – other recent news on the XBRL front was the announcement by the Global Reporting Initiative (GRI) of an XBRL taxonomy for Sustainability Reporting.  This taxonomy was co-developed by the GRI and Deloitte and is designed to make the consumption of data found in Sustainability Reports much easier.  
    Although there is no government mandate to file Sustainability Reports in XBRL format, organizations that do use the GRI guidelines for Sustainability Reporting are encouraged to tag and submit their data voluntarily to the GRI – who will populate a database with Sustainability Reporting data and make this available to the public. For more information about this initiative, you can go to the GRI web site: www.globalreporting.org. So how does all of this benefit corporate filers and investors? Since its introduction, the consensus in the market is that XBRL has mainly benefited the regulators and investment analysts who need to consume and analyze large volumes of financial data. But the emergence of more end-user tools for consuming and analyzing XBRL-based data, and the ability to perform quick comparisons of one company versus its peers and competitors in an industry group, will soon accelerate the benefits to corporate finance staff, as well as individual investors. This could apply to financial results tagged in XBRL, as well as non-financial information such as Sustainability Reporting – which over the long term will likely be integrated with financial reporting. And as multiple regulators and agencies in a country adopt the XBRL standard for corporate filings, more benefits will accrue as companies will be able to leverage one set of XBRL-based financial data for multiple regulatory filings. For more information about the latest developments in XBRL, check out the XBRL US or XBRL International web sites: www.xbrl.org, www.xbrlus.org. For more information about what Oracle is doing to support XBRL, here are some links: http://www.oracle.com/us/solutions/ent-performance-bi/disclosure-management-065892.html http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html Feel free to contact me if you have any questions or need more information: [email protected]

    Read the article

  • Plugin or module for filtering/sorting a large amount of data?

    - by prometheus
    I have a rather large amount of data (100 MB or so), that I would like to present to a user. The format of the data is similar to the following... Date              Location      Log File          Link 03/21/2010   San Diego   some_log.txt   http://somelink.com etc My problem is that I would like to have some nice/slick way for the user to filter the information. Unfortunately, because there is so much of it, the jQuery Table Filter plugin does not work (crashes the browser). I was wondering if there is a nice solution or if I have to simply do the filtering on the server end and have a bland pull-down menu / select-box interface for the client to use.

    Read the article

  • Strategy for storing and displaying form dropdown data for provinces, states, prefixes?

    - by meder
    I'm currently migrating from a class that stores lists of countries, states & provinces in the form of arrays to using Zend's Locale data in the form of LDML XML files. These LDML files provide localised lists of countries, currencies and languages, so I'm not exactly sure where I should store US states, Canadian provinces, and prefixes. I was thinking of possibly just creating a generic XML file and storing it in the same directory as the LDML files, but I have doubts because it wouldn't really be localised, as I'd store it in English. Should I go with storing it in a generic XML file, or possibly update each of the locale files (e.g. en.xml) and append to them? The latter is probably not worth the work, which is why I'm swaying towards just a general.xml or dropdown-data.xml. As for generating dropdown options, I suppose I could just grab all US states, append the array with Canadian provinces, and append that with an 'Other' option - does this seem like the right way to go about it?

    Read the article
