Search Results

Search found 59943 results on 2398 pages for 'storing data'.


  • Storing and displaying unicode string (हिन्दी) using PHP and MySQL

    - by Anirudh Goel
    I have to store Hindi text in a MySQL database, fetch it using a PHP script, and display it on a webpage. I did the following: I created a database, set its encoding to UTF-8, and set the collation to utf8_bin. I added a varchar field in the table and set its charset property to accept UTF-8 text. Then I set about adding data to it. Here I had to copy data from an existing site. The Hindi text looks like this: सूर्योदय:05:30. I directly copied this text into my database and used the PHP code echo(utf8_encode($string)) to display the data. Upon doing so, the browser showed me "??????". When I inserted the UTF-8 equivalent of the text by going to "view source" in the browser, however, I found that सूर्योदय translates into &#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351;. If I enter and store &#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351; in the database, it converts perfectly. So what I want to know is how I can directly store सूर्योदय in my database, then fetch it and display it on my webpage using PHP. Also, can anyone help me understand if there's a script which, when I type in सूर्योदय, gives me &#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351;?

    Solution Found

    I wrote the following sample script, which worked for me. Hope it helps someone else too:

        <html>
        <head><title>Hindi</title></head>
        <body>
        <?php
        include("connection.php");               // simple connection setup
        $result = mysql_query("SET NAMES utf8"); // the main trick
        $cmd = "select * from hindi";
        $result = mysql_query($cmd);
        while ($myrow = mysql_fetch_row($result)) {
            echo ($myrow[0]);
        }
        ?>
        </body>
        </html>

    The dump for my database storing Hindi UTF-8 strings is:

        CREATE TABLE `hindi` (
          `data` varchar(1000) character set utf8 collate utf8_bin default NULL
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `hindi` VALUES ('सूर्योदय');

    Now my question is: how did it work without specifying "META" or header info? Thanks!
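    A note on why "SET NAMES utf8" works: it declares the character set of the client connection, so the server stops transcoding the UTF-8 bytes into the default (often latin1) connection charset. Most drivers let you declare this at connect time instead of issuing the statement by hand. A minimal sketch of the same fix in Python, assuming the PyMySQL driver and the `hindi` table from the dump above (host/user/password are placeholders):

        import pymysql

        # charset="utf8mb4" makes the driver negotiate a UTF-8 connection,
        # which is what the hand-written "SET NAMES utf8" did in the PHP script.
        conn = pymysql.connect(host="localhost", user="root", password="",
                               database="test", charset="utf8mb4")
        with conn.cursor() as cur:
            cur.execute("SELECT data FROM hindi")
            for (text,) in cur.fetchall():
                print(text)  # prints the stored Devanagari string as-is
        conn.close()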

    Read the article

  • Smile releases its 2013 "Michelin Guide" to open source; the free white paper gains new sections on Cloud and Big Data

    Smile releases its 2013 "Michelin Guide" to open source; the free white paper gains new sections on Cloud and Big Data. For the 2013 edition of its reference guide to open source, Smile has expanded its white paper (285 pages) with some thirty new solutions and two new sections (Cloud and Big Data). More than 300 solutions are covered (200 of them evaluated in detail) across more than 40 application domains, organized into three "dimensions" (Infrastructure, Development and middleware, Applications). The guide takes the form of profile sheets for each product (product version, website, au...

    Read the article

  • Architecture for Social Graph data that has a Time Frame Associated?

    - by Jay Stevens
    I am adding some "social" features to an existing application. There is a limited number of node and edge types. Overall the data itself is relatively small (50,000-70,000 of each type of node), and there will be a number of edges (relationships) between them, almost all directional. This, I know, is relatively easy to represent with an RDF store (such as BrightstarDB) or something like Microsoft's Trinity (or really many of the NoSQL options). The thing that, I think, makes this a unique use case is that each relationship will have a timeframe associated with it (start and end dates). Right now I'm thinking of just storing this in a relational structure and dealing with the headaches of "traversing the graph", but I'm looking for suggestions on a better approach, both in terms of data structure and server:

        Columns
        ================
        From_Node_ID
        Relationship
        To_Node_ID
        StartDate
        EndDate

    Any suggestions or thoughts are welcomed.
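    As a concrete illustration of the relational approach, here is a minimal sketch in Python with the built-in sqlite3 module (table and column names follow the layout above; the data is made up). Storing ISO 8601 dates as text keeps point-in-time queries simple, and an open-ended relationship can use NULL for its end date:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE edges (
                From_Node_ID INTEGER NOT NULL,
                Relationship TEXT    NOT NULL,
                To_Node_ID   INTEGER NOT NULL,
                StartDate    TEXT    NOT NULL,  -- ISO 8601 text sorts chronologically
                EndDate      TEXT               -- NULL means still active
            )""")
        conn.execute("CREATE INDEX ix_from ON edges (From_Node_ID, Relationship, StartDate)")
        conn.executemany("INSERT INTO edges VALUES (?, ?, ?, ?, ?)", [
            (1, "follows", 2, "2012-01-01", "2012-06-30"),
            (1, "follows", 3, "2012-03-01", None),
        ])

        # Point-in-time slice of the graph: whom did node 1 follow on 2012-04-15?
        day = "2012-04-15"
        rows = conn.execute("""
            SELECT To_Node_ID FROM edges
            WHERE From_Node_ID = 1 AND Relationship = 'follows'
              AND StartDate <= ? AND (EndDate IS NULL OR EndDate >= ?)""",
            (day, day)).fetchall()
        print(rows)  # [(2,), (3,)]

    Traversal then becomes repeated point-in-time lookups, which is exactly the headache mentioned above: the date filter has to be re-applied at every hop.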

    Read the article

  • How do multi-platform games usually store save data?

    - by PixelPerfect3
    I realize this is a bit of a broad question, but I was wondering if there is a "standard" in the industry when it comes to storing save data for games (and is it different across platforms - Xbox/PS/PC/Mac/Android/iOS?) For example for a game like Assassin's Creed or The Walking Dead: They are on multiple platforms and they usually have to save enough information about the player and their actions. Do they use something like XML files, databases, or just straight binary dumps? How much does it differ from platform to platform? I would appreciate it if someone with experience in the game industry would answer this.
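    There is no single industry standard, but a common baseline is a version-tagged, serialized snapshot of the player state, with the container format (JSON, XML, binary) and the storage location varying by platform. As an illustration only, a minimal platform-neutral sketch in Python (the file name and fields are made up):

        import json

        SAVE_VERSION = 2  # bump when the schema changes, so old saves can be migrated

        def save_game(path, state):
            # a version tag lets later releases detect and migrate old save files
            payload = {"version": SAVE_VERSION, "state": state}
            with open(path, "w", encoding="utf-8") as f:
                json.dump(payload, f)

        def load_game(path):
            with open(path, encoding="utf-8") as f:
                payload = json.load(f)
            if payload["version"] != SAVE_VERSION:
                raise ValueError("old save format: migrate before loading")
            return payload["state"]

        save_game("slot0.sav", {"chapter": 3, "health": 87, "choices": ["spared_doug"]})
        print(load_game("slot0.sav"))

    Console platforms typically wrap whatever format the game chooses in their own save-data APIs, so the serialization logic above is usually the portable part.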

    Read the article

  • Ensure your view and function metadata is up to date.

    - by simonsabin
    If you use views and functions, you will see that SQL Server holds their rowset metadata in system tables. This means that if you change the underlying tables, columns, or data types, your views and functions can fall out of sync. This is especially the case with views and functions that use select *. To get the metadata updated you need to use sp_refreshsqlmodule. This forces the object to be "re-run" into the database and the metadata updated. Thomas mentioned sp_refreshview, which is a...(read more)
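    A sketch of how this might be driven from Python via pyodbc (the DSN is a placeholder; sys.sp_refreshsqlmodule and its @name parameter are the documented SQL Server interface), refreshing the cached metadata of every view in the database:

        import pyodbc

        conn = pyodbc.connect("DSN=MyDb;Trusted_Connection=yes")  # placeholder DSN
        cur = conn.cursor()
        # Build two-part quoted names for every view, then refresh each one.
        cur.execute("""
            SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
            FROM sys.views""")
        for (view_name,) in cur.fetchall():
            cur.execute("EXEC sys.sp_refreshsqlmodule @name = ?", view_name)
        conn.commit()

    A similar query over sys.objects can pick up functions as well, if those also need refreshing.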

    Read the article

  • Can I change a linux swap partition to a file system without losing data?

    - by Diogo
    I am using Ubuntu 11.10, having just switched from Windows to Ubuntu. I accidentally set up one of my file systems as a Linux swap partition, and now I want to access my files as I did in Windows, but I do not know any safe way to change that partition back to a file system without formatting it and losing the data, and I do not want to lose the data I have there. Does deleting the partition and mounting it again solve the problem?

    Read the article

  • Why do my Application Compatibility Toolkit Data Collectors fail to write to my ACT Log Share?

    - by Jay Michaud
    I am trying to get the Microsoft Application Compatibility Toolkit 5.6 (version 5.6.7320.0) to work, but I cannot get the Data Collectors to write to the ACT Log Share. The configuration is as follows.

        Machine:  ACT-Server
        Domain:   mydomain.example.com
        OS:       Windows 7 Enterprise 64-bit Edition
        Windows Firewall: File and Printer Sharing (SMB-In) is enabled for
                          Public, Domain, and Private networks
        ACT Log Share: ACT

    Share permissions*:

        Group/user names     Allow permissions
        ---------------------------------------
        Everyone             Full Control
        Administrator        Full Control
        Domain Admins        Full Control
        Administrators       Full Control
        ANONYMOUS LOGON      Full Control

    Folder permissions*:

        Group/user name      Allow permissions         Apply to
        -------------------------------------------------------
        ANONYMOUS LOGON      Read, write & execute     This folder, subfolders, and files
        Domain Admins        Full control              This folder, subfolders, and files
        Everyone             Read, write & execute     This folder, subfolders, and files
        Administrators       Full control              This folder, subfolders, and files
        CREATOR OWNER        Full control              Subfolders and files
        SYSTEM               Full control              This folder, subfolders, and files
        INTERACTIVE          Traverse folder / execute file,   This folder, subfolders, and files
                             List folder / read data,
                             Read attributes, Read extended attributes,
                             Create files / write data,
                             Create folders / append data,
                             Write attributes, Write extended attributes,
                             Delete subfolders and files, Delete,
                             Read permissions
        SERVICE              (same as INTERACTIVE)
        BATCH                (same as INTERACTIVE)

    *I am fully aware that these permissions are excessive, but that is beside the point of this question.

    Some of the clients running the Data Collector are domain members, but some are not. I am working under the assumption that this is a Windows file sharing permission issue or a network access policy issue, but of course I could be wrong. It is my understanding that the Data Collector runs in the security context of the SYSTEM account, which for domain members appears on the network as MYDOMAIN\machineaccount. It is also my understanding, from reading numerous pieces of documentation, that setting the ANONYMOUS LOGON permissions as I have above should allow these computer accounts and non-domain-joined computers to access the share.

    To test connectivity, I set up the Windows XP Mode virtual machine (VM) on ACT-Server. In the VM, I opened a command prompt running as SYSTEM (using the old "at" command trick) and used it to run explorer.exe. In this Windows Explorer instance, I typed \\ACT-Server\ACT into the address bar, and then I was prompted for logon credentials. The goal, though, was not to be prompted. I also used the "net use /delete" command in the command prompt window to delete connections to the \\ACT-Server\IPC$ share each time my connection attempt failed. I have made sure that the appropriate firewall exceptions are in place.

    Since ACT-Server is a domain member, the "Network access: Sharing and security model for local accounts" security policy is set to "Classic - local users authenticate as themselves". In spite of this, I still tried enabling the Guest account and adding permissions for it on the share, to no effect.

    What am I missing here? How do I allow anonymous logons to a shared folder as a step toward getting my ACT Data Collectors to deposit their data correctly? Am I even on the right track, or is the issue elsewhere?

    Read the article

  • Automatically storing packages before installing them on a .deb-based system?

    - by macias
    The reason I am asking this question is that I am concerned about simple rollback (I have already read how to find out what packages were installed). I would like to set a global (system-wide) option that forces the system to store each package before installing or updating it. With such a workflow I could update whatever I want, and if, for example, the newest version of Dolphin turned out to be worse than the previous one, I could simply go to the directory of stored packages and install the previous version instead (the previous version being either the base version, from the ISO, or the version from a previous update). Is there such a feature, a global option to automatically store each package before it is installed? It has to be guaranteed that no package is updated on the fly, i.e. without being stored first. I am learning LMDE, but an answer for any .deb-based system would be fine: Ubuntu, Debian, you name it.

    Read the article

  • What could be the best way to generalize data from Facebook and Twitter?

    - by Sjaak van der Heide
    I am not sure if this is the best subsite to ask this question, but I'm pretty sure it doesn't fit on the normal or Facebook SO page... I've been asked to make a general API for connecting to several social media platforms (at the moment Facebook and Twitter). I have already implemented both of them separately, meaning I retrieve the data I need from both Facebook and Twitter and hold it in its own data class: in my case, a list of FacebookTimelineItems and a list of TwitterTimelineItems. Now the hard part is taking the parts that are used in both (username, id, message, and such) and making one general class that is eventually passed on to whoever or whatever sent the call to my API. These are two pics of the data classes I have: http://imageshack.us/photo/my-images/703/facebookdata.png/ and http://imageshack.us/photo/my-images/204/twitterdata.png/ (probably not 100% correct, but they give an idea of what it looks like). I've been having several ideas about how to generalize the two, which is harder than I thought at first (see the sketch below):

    1. Create an interface (TimelineItem) and let the other classes extend it. This way I'll always be sure I have a class that contains at least the basic info I need. The downside is that deserializing the JSON seems to be a nightmare.

    2. Use the two data classes I have and combine them into a new class afterwards, then pass that one back to whoever requested it. This would probably work, but I get the idea it's not the best way to tackle this problem, and it would be pretty dodgy even if I got it working.

    3. Or, in case the other two are nearly impossible: keep the two separated in the front end, and go sit in the corner crying because I've just figured out you can't lump Facebook and Twitter together...

    Note: I don't have to make the front-end part (view); I just make sure the model is nicely filled with data :) I hope I placed this in the right section; if I didn't, I apologise and would like to know where I should go with my question. Thanks in advance for any replies/ideas/opinions on this.
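    A minimal Python sketch of option 2's mapping step (the key names below are illustrative stand-ins, not the actual Facebook or Twitter payload fields): keep deserialization platform-specific, then normalize into one common class that the API hands back.

        from dataclasses import dataclass

        @dataclass
        class TimelineItem:
            # the common denominator both platforms can always fill in
            item_id: str
            author: str
            message: str
            source: str  # "facebook" or "twitter"

        def from_facebook(raw: dict) -> TimelineItem:
            # raw: one already-deserialized Facebook post (hypothetical keys)
            return TimelineItem(raw["id"], raw["from"]["name"],
                                raw.get("message", ""), "facebook")

        def from_twitter(raw: dict) -> TimelineItem:
            # raw: one already-deserialized tweet (hypothetical keys)
            return TimelineItem(raw["id_str"], raw["user"]["screen_name"],
                                raw["text"], "twitter")

        fb_posts = [{"id": "10", "from": {"name": "Alice"}, "message": "hi"}]
        tweets = [{"id_str": "99", "user": {"screen_name": "bob"}, "text": "yo"}]
        timeline = ([from_facebook(p) for p in fb_posts] +
                    [from_twitter(t) for t in tweets])
        print(timeline)  # one uniform list, regardless of origin

    This sidesteps the JSON-deserialization problem of option 1, because each platform's raw structure is parsed by its own mapper before anything is generalized.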

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to the Oracle BI applications blog! This blog will talk about various features, the general roadmap, descriptions of functionality, and implementation steps related to Oracle BI applications. In this first post we start with an overview of the BI apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback.

    The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, and front-line sales reps, each of whom have different needs. The applications integrate and transform data from a range of enterprise sources (including Siebel, Oracle, PeopleSoft, SAP, and others) into actionable intelligence for each business function and user role. This post starts with the key benefits and characteristics of the Oracle BI applications; in a series of subsequent posts, each of these points will be explained in detail.

    Why BI apps?

    - Demonstrate the value of BI to a business user: show reports, dashboards, and models that can answer their business questions as part of the sales cycle.
    - Demonstrate the technical feasibility of a BI project, significantly lowering risk and improving success.
    - Build-vs-buy benefit: you don't have to start with a blank sheet of paper.
    - Help consolidate disparate systems; data integration in M&A situations.
    - Insulate BI consumers from changes in the OLTP systems.
    - Present OLTP data and highlight issues of poor or missing data, improving data quality and accuracy.

    Prebuilt integrations

    - Prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP.
    - Co-developed with input from functional experts on the BI and Applications teams.
    - Out-of-the-box dimensional-model-to-source-model mappings.
    - Multi-source and multi-instance support.

    Rich data model

    - A very rich dimensional data model, built over 10 years, that incorporates best practices from a BI modeling perspective and reflects source system complexities.
    - A conformed dimensional model across all business subject areas allows cross-functional reporting, e.g. customer/supplier 360.
    - Over 360 fact tables across 7 product areas: CRM - 145, SCM - 47, Financials - 28, Procurement - 20, HCM - 27, Projects - 18, Campus Solutions - 21, PLM - 56.
    - Supported by 300 physical dimensions.
    - Support for extensive calendars: Gregorian, enterprise, and ledger-based.
    - Conformed data model and metrics for real-time vs. warehouse-based reporting.
    - Multi-tenant enabled.

    Extensive BI-related transformations

    BI apps ETL and data integration support the various transformations required for dimensional models and reporting requirements. All of these have been distilled into common patterns and abstracted logic that can be readily reused across modules:

    - Slowly Changing Dimension support.
    - Hierarchy flattening support; row/column hybrid hierarchy flattening.
    - "As Is" vs. "As Was" hierarchy support.
    - Currency conversion: support for 3 corporate currencies plus CRM, ledger, and transaction currencies.
    - UOM conversion.
    - Internationalization/localization; dynamic data translations.
    - Code standardization (domains).
    - Historical snapshots.
    - Cycle and process lifecycle computations.
    - Balance facts.
    - Equalization of GL accounting chartfields/segments; standardized values for categorizing GL accounts.
    - Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL.
    - Materialization of data only available through costly and complex APIs, e.g. Fusion Payroll and EBS/Fusion accruals.
    - Complex event interpretation of source data, e.g.: what constitutes a transfer; deriving supervisors via the position hierarchy; deriving the primary assignment in PSFT; categorizing and transposing payroll balances into specific metrics to support side-by-side comparison of measures such as fixed salary, variable salary, tax, bonus, and overtime payments; counting of events, e.g. converting events to fact counters so that the number of hires can easily be added up and compared alongside total transfers and terminations.
    - Multi-pass processing of multiple sources, e.g. headcount, salary, promotion, and performance, to allow side-by-side comparison.
    - Adding value to data to aid analysis through banding and additional domain classifications and groupings, allowing higher-level analytical reporting and data discovery.
    - Calculation of complex measures, for example COGS, DSO, DPO, inventory turns, etc., and transfers within a hierarchy (or out of/into a hierarchy) relative to a viewpoint in the hierarchy.

    Configurability and extensibility

    - Extensibility for various entities, either automated or as part of the extension methodology.
    - Key flexfield and descriptive flexfield support.
    - Extensible attribute support (JDE).
    - Conformed domains.

    ETL architecture

    - A modular adapter architecture that allows multiple product lines to feed a single conformed model.
    - Multi-source and multi-technology support.
    - Orchestration: creates a load plan that takes into account task dependencies and the customer's deployment, generating a plan from many complex ETL tasks.
    - Plan optimization allowing parallel ETL tasks.
    - Oracle: bitmap indexes and partition management.
    - High availability support; follow-the-sun support.

    TCO

    - Several utilities and capabilities that help with the overall total cost of ownership and ensure a rapid implementation.
    - Improved cost of ownership: lower cost to deploy, and ongoing support for new versions of the source application.
    - Task-based setup flows; functional setup performed in a web UI by a functional person.
    - Data lineage.
    - Configuration test-to-production support.

    Security

    - Both data and object security, enabling implementations to quickly configure the application to their reporting security needs.
    - Fine-grained object security at the report/dashboard and presentation catalog level.
    - Data security integration with source systems.
    - Extensible to support external data security rules.

    Extensive set of KPIs

    - Over 7,000 base and derived metrics across all modules.
    - Time series calculations (YoY, % growth, etc.).
    - Common currency and UOM reporting.
    - Cross-subject-area KPIs (analyzing HR vs. GL data, drilling from GL to AP/AR, etc.).

    Prebuilt reports and dashboards

    - 3,000+ prebuilt reports supporting a large number of industries.
    - Hundreds of role-based dashboards.
    - Dynamic currency conversion at the dashboard level.

    Highly tuned performance

    The BI apps have been tuned over the years for both a very performant ETL and dashboard performance, using best practices and advanced database features:

    - Optimized data model for BI and analytic queries.
    - Prebuilt aggregates, and the ability for customers to easily create their own aggregates on warehouse facts, for scalable end-user performance.
    - Incremental extracts and loads; incremental aggregate builds.
    - Automatic table index and statistics management.
    - Parallel ETL loads.
    - Source system delete handling.
    - Low-latency extract with GoldenGate; micro-ETL support.
    - Bitmap indexes and partitioning support.
    - Modularized deployment: start small and add other subject areas seamlessly.

    Source-specific staging and real-time schema

    - Support for a source-specific operational reporting schema for EBS, PSFT, Siebel, and JDE.

    Application integrations

    - Integration with source systems as well as other applications, providing value-add through BI and enabling BI consumption during operational decision making.
    - Embedded dashboards for Fusion, EBS, and Siebel applications.
    - Action Link support.
    - Marketing Segmentation, Sales Predictor Dashboard, and Territory Management.

    External integrations

    - Support for loading external data.
    - External data enrichment choices: UNSPSC, item class, etc.
    - Extensible spend classification.

    Broad deployment choices

    - Exalytics support.
    - Databases: Oracle, Exadata, Teradata, DB2, MSSQL.
    - ETL tool of choice: ODI (coming), Informatica.

    Extensible and customizable

    - Extensible architecture and methodology to add custom and external content.
    - Upgradable across releases.

    Thanks for reading a long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics, as well as suggestions on what other topics you would like us to cover.

    Read the article

  • How to save and retrieve data as key-value pairs or files in isolated storage?

    - by kaleidoscope
    One can use isolated storage to store data locally on the user's computer. There are two ways to use isolated storage. The first way is to save or retrieve data as key/value pairs by using the IsolatedStorageSettings class. The second way is to save or retrieve files by using the IsolatedStorageFile class. More details can be found at http://silverlight.net/learn/quickstarts/isolatedstorage/   Rituraj, J

    Read the article

  • How to install software that "Requires installation of untrusted packages"?

    - by user135682
    I am using Ubuntu 12.10. When I try to update the software, it shows me this error: Requires installation of untrusted packages This requires installing packages from unauthenticated sources. Details kalzium-data kanagram kate-data kde-l10n-engb kde-l10n-zhcn kde-runtime-data kdegames-data klettres-data libakonadi-kabc4 liblxc0 libwildmidi-config libwildmidi1 lxc marble-data nepomuk-core-data parley-data tomboy

    Read the article

  • Recovering data from failed Raid configuration with 4 drives and two raid sets (Asus P6T / Intel ICH10r)

    - by user56365
    I've added the complete, detailed version of my question below for those who can help, but I want to quickly summarize it first. I set up two RAID arrays using (4) WD Raptors: a striped set for the OS and a 1+0 set for crucial data. After booting once (out of the 50 times a cable fell out), a drive wasn't recognized in the array anymore. After trying to fix it, another drive did the same. I now have two drives remaining, luckily with the parity information. I know the striped set is gone, but I need the data on the other set. Can anyone recommend anything to recover the data, or a way to fix the two drives so that the RAID controller recognizes them again, given that they are listed on the utility screen as still a part of the configuration but are reported as not found?

    More Details

    I recently upgraded to an ASUS P6T motherboard with an Intel ICH10R RAID controller and changed my previous 4-drive RAID array from strictly a RAID 1+0 set to a RAID 0 set for the OS/page/scratch drive and a RAID 1+0 set for crucial data. I never had problems with this configuration after upgrading, even when a drive died and was replaced; I managed to rebuild the array fine. Unfortunately, this time around a cable came unattached, and I booted my system up to the RAID status screen showing the degraded error. This shouldn't have been a problem, but after I reattached the drive it was no longer recognized as a member of the array. Both drives actually show up as non-member disks. I've spent a very, very long time online trying to find information or support and haven't had much luck. After spending time trying to scan the drive for errors, damaged partition info, etc., another drive in the set decided it didn't want to be recognized as a part of the array. At this point I have two of the four drives still functioning, but the RAID 1+0 array went from degraded to failed, and I must find a way to retrieve that data. I think the two drives still in the array have the parity information, because they show up as OS (110GB), BACKUP (80GB) and OS:1 (110GB), BACKUP (80GB) under Windows disk management. The other two are simply 74 GB raw unallocated. Is it possible to recover the data using those two drives only, and which tool would I use? Could it be a simple partition table error, or some other error that is repairable with the hard drive utilities out there? I know the RAID 0 set is done for, but I would assume that because the correct drives failed in a 1+0 configuration to preserve the data, I can retrieve it somehow.

    Read the article

  • When does Information become Data? (i.e. Information wants to be free) [closed]

    - by James P. Wright
    I often hear programmers talk about how Information Wants To Be Free, which I mostly agree with, but the thing people don't often pay attention to is that information and data are not the same thing. Should data also be free? Does that mean all of you should have full access to my Social Security Number and other personal "information"? Where is the limit? And if there is a limit, why do people throw this phrase around as if it fits every circumstance (like this one)?

    Read the article

  • Google, the worldwide data warehouse: knowledge sharing, or ownership of the world market?

    Google, the worldwide data warehouse: knowledge sharing, or ownership of the world market? Google has announced that additional data, named World Bank, is now available online through its Public Data Explorer tool. Through this tool you can find worldwide information on agriculture, electricity consumption per capita, CO2 emissions per capita, the number of internet users, and more. Various charts give you statistics on all the capitals, letting you compare the different information specific to...

    Read the article

  • What's a good web-based application for storing your book collection?

    - by JJarava
    Hi all! I'm looking for software (ideally web-based, i.e., something I can install on my home server on my LAN) to keep track of my books/DVDs/etc. I already know of Readerware's offerings, which I find quite interesting, but I'd like something web-based so I can run it on my Mac mini in the living room and access it from any of the computers in the house. I've been googling around, and I've been quite surprised to NOT find any clear option. Alternatively, good "native" software for Windows/Mac OS X would be more than welcome. Thanks a lot. PS: Given the number of interesting suggestions for Web 2.0, ASP-hosted type sites, I've clarified the question a bit: I'd prefer software that I can install and use on my own systems, not something "in the cloud" (although I'll check the suggestions out!)

    Read the article

  • Facebook releases Presto, its open source query engine for big data, claimed to be ten times faster than Hadoop's

    Facebook releases Presto, its open source query engine for big data, claimed to be ten times faster than Hadoop's. Many companies, Facebook among them, depend on big data, and in this field the Hadoop/Hive pair is one of the reference solutions. As a reminder, Hive is the popular query engine for Hadoop. However, it may be that MapReduce, the essential component on which Hive relies, is not optimized for situations where the quantity of data exceeds a certain...

    Read the article

  • How are large companies handling the storing and cataloging of software installation disks?

    - by CT
    I just started working in the IT department of a small-to-medium-sized construction company with about 200 users. One of my responsibilities is to set up and configure all new machines that come in. I would like suggestions on how to best manage the installation disks and licenses of the software that comes with them, plus any additional licensed software such as AutoCAD, Photoshop, etc., as well as peripheral driver disks for printers and scanners. Right now every machine is associated and labeled with an asset ID. All asset IDs are kept in a spreadsheet with the applicable serial numbers, current user, warranty info, and software licenses. The physical disks are then kept in folders in a cabinet, each folder marked with the asset ID as well as the current user. My problem with this is that the system was not maintained very thoroughly before I came to the company. There are plenty of software folders with no asset IDs labeled on them, plenty of missing software folders (most likely many of the unlabeled ones), and folders with names but no asset IDs. Machines get switched to different users without the folders and spreadsheet being updated. I am not saying this method would necessarily be bad if it were better implemented and managed, but I am going to have to take a lot of time to fix the system currently in place. I thought I would ask the community first how others manage this process, in case there are easier, more efficient ways of doing so. Thank you.

    Read the article

  • Windows Server 2003 - Are ODBC Data Sources set per-user?

    - by Jakobud
    When I'm logged into our Windows Server 2003 server, I don't see any ODBC Data Sources, but when a different user logs in (one who doesn't have administrative rights), they have a big list of ODBC Data Sources. Are ODBC Data Sources set on a per-user basis? How come the Administrator can't see that user's ODBC Data Sources?
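    User DSNs are stored per-user in the registry (under HKEY_CURRENT_USER), while System DSNs (under HKEY_LOCAL_MACHINE) are visible to every account, which is why each login can see a different list. A sketch in Python using the standard winreg module to list both (on 64-bit Windows, 32-bit DSNs may additionally live under a Wow6432Node path not shown here):

        import winreg

        def list_dsns(hive, label):
            # DSN name -> driver pairs live under ODBC.INI\ODBC Data Sources
            path = r"Software\ODBC\ODBC.INI\ODBC Data Sources"
            try:
                key = winreg.OpenKey(hive, path)
            except OSError:
                print(f"{label}: none")
                return
            i = 0
            while True:
                try:
                    name, driver, _ = winreg.EnumValue(key, i)
                except OSError:
                    break  # no more values
                print(f"{label}: {name} ({driver})")
                i += 1

        list_dsns(winreg.HKEY_CURRENT_USER, "User DSN")     # differs per logged-in user
        list_dsns(winreg.HKEY_LOCAL_MACHINE, "System DSN")  # shared by all users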

    Read the article

  • Les Bouches-du-Rhône sponsor developers to promote open data: have you submitted your application?

    Les Bouches-du-Rhône are launching a sponsorship program for application and website developers, in order to promote open data and the region, with Developpez.com as a partner. Since April, the Bouches-du-Rhône Tourisme association has joined the open data movement by opening its portal and releasing around a hundred datasets related to tourism in the department. All the information held by the association has been made available and usable by developers, scientists, associations, students, and businesses; in short, by everyone. Among them you can find, for example...

    Read the article


  • Should I be relying on WebTests for data validation?

    - by Alexander Kahoun
    I have a suite of web tests created for a web service. I use it for testing a particular input method that updates a SQL database. The web service doesn't have a way to retrieve the data; that's not its purpose, only to update it. I have a validator that validates the response XML that the web service generates for each request, and all of that works fine. A teammate suggested I add data validation, so that after the initial response validator runs I check the database and compare the stored data with what was in the input request. We have a number of services and libraries, separate from the web service I'm testing, that I can use to get the data and compare it. The problem is that when I run the web test, the data validation always fails, even when the request succeeds. I've tried putting the thread to sleep between the response validation and the data validation, but to no avail; it always gets the data from before the response validation. I can set a breakpoint and visually see that the data has been updated in the DB; the funny thing is that when I step through it in debug with the breakpoint, it does validate successfully. Before I get too much more into this issue I have to ask: is this the purpose of web tests? Should I be able to validate data through service calls in this manner, or am I asking too much of a web test, and is response validation as far as I should go?
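    The pass-under-a-breakpoint, fail-at-full-speed symptom is the classic sign of the validator reading the database before the service's write has committed. Rather than a fixed sleep, a common fix is to poll for the expected row until a timeout. A generic sketch of the pattern in Python (fetch_row and the expected value are made-up stand-ins for whatever data-access library the test suite already uses):

        import time

        def poll_until(predicate, timeout=10.0, interval=0.25):
            """Re-evaluate a condition until it holds or the timeout expires."""
            deadline = time.monotonic() + timeout
            while time.monotonic() < deadline:
                if predicate():
                    return True
                time.sleep(interval)
            return predicate()  # one final check at the deadline

        # Made-up stand-ins for the real query and expected value:
        expected = ("request-42", "processed")
        def fetch_row(key):
            return ("request-42", "processed")  # would re-query the DB each call

        ok = poll_until(lambda: fetch_row("request-42") == expected)
        print(ok)  # True once the committed row matches

    Because the predicate re-queries on every attempt, the validator sees the committed value as soon as it lands, instead of acting on a single stale read taken too early.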

    Read the article
