Search Results

Search found 59569 results on 2383 pages for 'data theory'.


  • Convenience of MySQL over XML

    - by Bonechilla
    Currently I use XML to store specific information needed to correctly load a few things, such as a list of specified characters, scenes, and music. I also use JAXB in combination with standard ZIP compression/decompression to store a list of extraneous data. This data is called on to add functionality to a character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a grand list that contains the names of each file (extensions omitted), zipped in a folder that gets encrypted. At first, using XML was working fine; however, as the skill list grows, I worry about its stability. I was wondering if I should begin storing the data in MySQL instead. Originally I planned to simply convert everything from XML to JSON, but I think MySQL would possibly be a better move. Can anyone explain the key differences and the pros and cons of each? I guess I'm looking for the way to store the data most conveniently that would also be easy to operate on. The data is mostly primitives and strings, and the only ArrayList of values I have I can just concatenate into a single field and parse later. Edit: If I am going in the right direction with XML, would it make sense to convert it to JSON and use maybe Kryo or EclipseLink JAXB (MOXy)?
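
    On the JSON side of the question: one appeal of the EclipseLink MOXy route is that a single set of JAXB annotations can emit either XML or JSON, so switching formats doesn't mean rewriting the model. A minimal sketch, assuming MOXy is on the classpath; the Skill class and its fields are invented for illustration:

        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Marshaller;
        import javax.xml.bind.annotation.XmlRootElement;
        import org.eclipse.persistence.jaxb.JAXBContextFactory;
        import org.eclipse.persistence.jaxb.MarshallerProperties;

        @XmlRootElement
        class Skill {
            public String name;
            public int power;
        }

        public class SkillExport {
            public static void main(String[] args) throws Exception {
                Skill s = new Skill();
                s.name = "Fireball";
                s.power = 42;

                JAXBContext ctx = JAXBContextFactory
                        .createContext(new Class[] { Skill.class }, null);
                Marshaller m = ctx.createMarshaller();
                m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
                // Switching the media type is all it takes to emit JSON instead of XML.
                m.setProperty(MarshallerProperties.MEDIA_TYPE, "application/json");
                m.setProperty(MarshallerProperties.JSON_INCLUDE_ROOT, true);
                m.marshal(s, System.out);
            }
        }

    Kryo, by contrast, is a binary serializer: typically faster and more compact than XML or JSON, but the resulting files are no longer hand-editable, which matters if skills are authored by editing files directly.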

    Read the article

  • ubuntu 13.04 - wireless settings help

    - by James Ellis
    I'm having problems connecting to wifi in Ubuntu 13.04, so I was wondering about filling in the settings manually, i.e. the IPv4 and IPv6 details, the SSID and BSSID info, etc. I did try this before, but maybe I put in the wrong data, or not enough. Would that make it work? I just don't know how to find out some of the data you need to put in, or whether I'm putting the wrong stuff in. I'm new and it's confusing. Does anyone know the solution?

    Read the article

  • T-SQL User-Defined Functions: the good, the bad, and the ugly (part 4)

    - by Hugo Kornelis
    Scalar user-defined functions are bad for performance. I have already shown that for T-SQL scalar user-defined functions without and with data access, and for most CLR scalar user-defined functions without data access; in this blog post I will show that CLR scalar user-defined functions with data access fit into that picture.

    First attempt

    Sticking to my simplistic example of finding the triple of an integer value by reading it from a pre-populated lookup table, and following the standard recommendations...(read more)

    Read the article

  • ubuntu 9.04 /var/www permissions

    - by luca
    Ubuntu 9.04; user luca wants to access the /var/www directory. The directory is owned by user root, group root. I changed the group ownership to www-data (sudo chgrp -R www-data /var/www/), added write privileges to that group (sudo chmod -R g+r /var/www), and added luca to that group (sudo adduser luca www-data). Now, why can't luca still write to /var/www? He should be able to, right? In /etc/group we have: "www-data:x:33:luca". Permissions for /var/www are: "drwxrwxr-x 2 root www-data 4096 Feb 26 15:35 www"

    Read the article

  • Creating a chained dropdownlist using AJAX and XML

    This article shows how to create a chained drop-down list for presenting data from hierarchical data sets. Here I'll be discussing a method to populate ASPX dropdown lists using partial page rendering with AJAX. The data source is a simple XML data file.

    Read the article

  • Common mistakes when building a quality website

    A Google search on "how to build a website" will return hundreds of thousands of websites with simple text and video tutorials; most offer rock-solid theory and ideas, but which teach you why and how... [Author: Johnny Hughes - Web Design and Development - April 07, 2010]

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves for October 20-26, 2013

    - by OTN ArchBeat
    What are the 4,460 fans of the OTN ArchBeat Facebook Page talking about? The list below represents the Top 10 most popular articles, blog posts, and other content from across the community.

    - Enterprise Grade Deployment Considerations for Oracle Identity Manager AD Connector | Firdaus Fraz. Oracle Fusion Middleware solution architect Firdaus Fraz provides best practice recommendations for setting up an enterprise deployment environment for the OIM connector for Microsoft Active Directory.
    - A Roadmap for SOA Development and Delivery | Mark Nelson. Do you know the way to S-O-A? Mark Nelson does. His latest blog post, part of an ongoing series, will help to keep you from getting lost along the way.
    - The road ahead for WebLogic 12c | Edwin Biemond. Oracle ACE Edwin Biemond shares his thoughts on announced new features in Oracle WebLogic 12.1.3 & 12.1.4 and compares those upcoming releases to Oracle WebLogic 12.1.2.
    - Oracle GoldenGate 12c - New Release, New Features | Michael Rainey. Rittman Mead's Michael Rainey takes you on a guided tour through the GoldenGate 12c features that "are relevant to data warehouse and data migration work we typically see in the business intelligence world."
    - Reproducing WebLogic Stuck Threads with ADF CreateInsert Operation and ORDER BY Clause | Andrejus Baranovskis. Another post from Oracle ACE Director Andrejus Baranovskis on dealing with WebLogic stuck threads. This one includes a test case application you can download.
    - The Impact of SaaS - The Times They Are A-Changin' | Floyd Teter. Oracle ACE Director Floyd Teter shares some truly interesting insight gained in conversations with three Fortune 500 CIOs.
    - Configure Oracle Identity Manager AD/LDAP Authentication | Arda Eralp. A step-by-step how-to from a member of the Fusion Middleware Applications Consultancy team.
    - Java-Powered Robot Named NAO Wows Crowds | Tori Wieldt. Tori Wieldt interviews a robot and a human.
    - Updated ODI Statement of Direction | Robert Schweighardt. Heads up, Oracle Data Integrator fans! A new product statement of direction document is available, offering "an overview of the strategic product plans for Oracle's data integration products for bulk data movement and transformation, specifically Oracle Data Integrator (ODI) and Oracle Warehouse Builder (OWB)."
    - Oracle BI Apps 11.1.1.7.1 - GoldenGate Integration - Part 2: Setup and Configuration | Michael Rainey. Michael Rainey continues his series with another technical article for you GoldenGate fans.

    Thought for the Day: "Intuition will tell the thinking mind where to look next." — Jonas Salk, American medical researcher and virologist (October 28, 1914 – June 23, 1995) Source: brainyquote.com

    Read the article

  • Connectify Dispatch: Link All Your Network Connections into a Super Pipeline

    - by Jason Fitzpatrick
    Connectify Dispatch is a network management tool that takes all the connections around you (Ethernet, Wi-Fi nodes, even 3G/4G cellular connections) and combines them into one giant data pipeline. At its simplest, Connectify Dispatch takes all the network inputs available to your computer, be they hard-line Ethernet, Wi-Fi nodes, or cellular connections, and merges the separate data connections seamlessly into one master connection. If any of the connections falters (say, your 3G reception goes out), Connectify automatically shifts the data to the other available networks without any interruption. In addition, you can specify which network Connectify should favor with connection prioritization; perfect for using your cellular connection without blowing through your data cap for the month right away. Hit up the link below to read more about Connectify Dispatch and the companion app Connectify Hotspot.

    Connectify Dispatch

    Read the article

  • beginner - best way to do a 'Confirm' page? [closed]

    - by W_P
    I am a beginning web app developer wondering about the best way to implement a "confirm page" upon form submission. I have heard that it's best for the script that a form POSTs to to handle the POST data and then redirect to another page, so the user isn't directly viewing the page that was POSTed to. My question is about the best way to implement a "confirm before data save" page. Do I:

    1. Have my form POST to a script, which marshals the data, puts it in a GET, and redirects to the confirm page, which unmarshals and displays the data in another form, where the user can then either confirm (which causes another POST to a script that actually saves the data) or deny (which causes the user to be redirected back to the original form, with their input restored)?

    2. Have my form POST directly to the confirm page, which is displayed to the user and then, like #1, gives the user the option to confirm or deny?

    3. Have my form GET the confirm page, which then does the expected behavior?

    I feel like there is a common-sense answer to this question that I am just not getting.
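
    Here is a minimal, self-contained sketch of option 1 (POST, redirect with the data, confirm, then a second POST that actually saves) using the JDK's built-in com.sun.net.httpserver. All paths and parameter names are invented, and a real implementation would escape the HTML and keep the pending data in a server-side session rather than in the query string:

        import com.sun.net.httpserver.HttpServer;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.net.URLDecoder;
        import java.nio.charset.StandardCharsets;

        public class ConfirmFlow {
            public static void main(String[] args) throws Exception {
                HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);

                // Step 1: the original form POSTs here; stash the input in the
                // redirect so the browser lands on a GET-able confirm page.
                server.createContext("/submit", ex -> {
                    String body = new String(ex.getRequestBody().readAllBytes(),
                            StandardCharsets.UTF_8);          // e.g. "name=Ada"
                    ex.getResponseHeaders().set("Location", "/confirm?" + body);
                    ex.sendResponseHeaders(303, -1);          // 303 See Other
                });

                // Step 2: re-display the data in a second form (escaping omitted).
                server.createContext("/confirm", ex -> {
                    String data = ex.getRequestURI().getRawQuery();
                    String html = "<form method='post' action='/save'>"
                            + "<input type='hidden' name='data' value='" + data + "'>"
                            + "Save \"" + URLDecoder.decode(data, StandardCharsets.UTF_8)
                            + "\"? <button>Confirm</button></form>";
                    byte[] out = html.getBytes(StandardCharsets.UTF_8);
                    ex.sendResponseHeaders(200, out.length);
                    try (OutputStream os = ex.getResponseBody()) { os.write(out); }
                });

                // Step 3: only this handler actually persists anything.
                server.createContext("/save", ex -> {
                    // ... write to the database here ...
                    ex.getResponseHeaders().set("Location", "/done");
                    ex.sendResponseHeaders(303, -1);
                });

                server.start();
            }
        }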

    Read the article

  • Node.JS testing with Jasmine, databases, and pre-existing code

    - by Jim Rubenstein
    I've recently built the start of a core system which is likely going to turn into a monster product. I'm building the system with node.js, and decided after I got a small base built that it'd be a great idea to start using some sort of automated test suite for the application. I decided to use Jasmine, as it seems pretty solid and has a lot of features for stubbing, spying, and mocking methods and classes. The application has a lot of external data stores and API access (Kestrel, MySQL, MongoDB, Facebook, and more). My issue is, I've got a good amount of code written that I want to start testing, as it represents the underpinnings of the application. What are the best practices for testing methods/classes that access external APIs that I may or may not have control over? As an example, I have a data structure that fetches a bunch of data from a MySQL database. I want to test the method that retrieves the data, and I'm not sure how to go about it. I could test the fetch method, which is supposed to return an array of objects, but to isolate the method from the database I need to define my own fixture data. So what I end up doing is stubbing the MySQL execution and returning a static dataset; I end up writing a function that returns the dataset that makes my test pass. That doesn't seem to actually test the code, other than verifying a method is being called. I know this is kind of abstract and vague; the idea of testing seems very abstract in itself, though, so hopefully someone has some experience and can guide me in the right direction. Any advice or reading I can do is more than welcome. Thanks in advance.
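
    One way out of the "my stub just makes the test pass" worry is to ensure the method under test does real work on the fixture rows (mapping, filtering, defaulting), so the assertion checks behavior rather than merely that a call happened. The question is about node.js and Jasmine, but the shape of the idea fits in a few lines of Java; every name here is invented for illustration:

        import java.util.List;
        import java.util.Map;
        import java.util.stream.Collectors;

        // The seam: production code gets rows from MySQL, tests hand in fixtures.
        interface RowSource {
            List<Map<String, Object>> query(String sql);
        }

        class UserRepository {
            private final RowSource db;
            UserRepository(RowSource db) { this.db = db; }

            // The logic worth testing: filtering and mapping, not the driver call.
            List<String> activeUserNames() {
                return db.query("SELECT name, active FROM users").stream()
                        .filter(row -> Boolean.TRUE.equals(row.get("active")))
                        .map(row -> (String) row.get("name"))
                        .collect(Collectors.toList());
            }
        }

        class UserRepositoryTest {
            public static void main(String[] args) {
                RowSource fixture = sql -> List.of(
                        Map.of("name", "ada", "active", true),
                        Map.of("name", "bob", "active", false));
                List<String> names = new UserRepository(fixture).activeUserNames();
                // Checks real behavior, not just "query was called".
                System.out.println(names.equals(List.of("ada")) ? "pass" : "FAIL: " + names);
            }
        }

    If the fetch method really is a bare SELECT with no logic of its own, then there is honestly little to unit-test, and an integration test against a disposable test database is the tool that fits better.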

    Read the article

  • What information must never appear in logs?

    - by MainMa
    I'm about to write the company guidelines about what must never appear in logs (the trace of an application). In fact, some developers try to include as much information as possible in the trace, making it risky to store those logs and extremely dangerous to submit them, especially when the customer doesn't know this information is stored, because she never cared about this and never read the documentation and/or warning messages.

    For example, when dealing with files, some developers are tempted to trace the names of the files. For example, before appending a file name to a directory, if we trace everything on error, it will be easy to notice that the appended name is too long, and that the bug in the code was forgetting to check the length of the concatenated string. That is helpful, but it is sensitive data, and must never appear in logs. In the same way, the following must never appear in a trace:

    - Passwords,
    - IP addresses and network information (MAC address, host name, etc.)¹,
    - Database accesses,
    - Direct input from the user and stored business data.

    So what other types of information must be banished from the logs? Are there any guidelines already written which I can use?

    ¹ Obviously, I'm not talking about things like IIS or Apache logs. What I'm talking about is the sort of information which is collected with the sole intent of debugging the application itself, not of tracing the activity of untrusted entities.

    Edit: Thank you for your answers and your comments. Since my question is not very precise, I'll try to answer the questions asked in the comments:

    What am I doing with the logs? The logs of the application may be stored locally, which means either in plain text on the hard disk of the localhost, in a database (again in plain text), or in the Windows event log. In every case, the concern is that those stores may not be safe enough. For example, when a customer runs an application and this application stores logs in a plain text file in the temp directory, anybody who has physical access to the PC can read those logs. The logs of the application may also be sent over the internet. For example, if a customer has an issue with an application, we can ask her to run the application in full-trace mode and send us the log file. Also, some applications may automatically send us a crash report (and even if there are warnings about sensitive data, in most cases customers don't read them).

    Am I talking about specific fields? No. I'm working on general business applications only, so the only sensitive data is business data. There is nothing related to health or other fields covered by specific regulations. But thank you for bringing that up; I probably should take a look at those fields for some clues about what to include in the guidelines.

    Isn't it easier to encrypt the data? No. It would make every application much more difficult to work with, especially if we want to use C# diagnostics and TraceSource. It would also require managing authorizations, which is not the easiest thing to do. Finally, if we are talking about logs submitted to us by a customer, we must be able to read the logs, but without having access to sensitive data. So technically, it's easier to never include sensitive information in the logs at all, and to never worry about how and where those logs are stored.
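
    The question's context is C# and TraceSource, but one mechanical backstop for such a guideline is a single choke point that scrubs known-sensitive keys before anything is written. A minimal sketch of the idea in Java; the key list and the key=value message format are assumptions, not something from the post:

        import java.util.Set;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class ScrubbingLogger {
            // Keys whose values must never reach the log, per the guidelines.
            private static final Set<String> SENSITIVE = Set.of("password", "token", "ip");
            private static final Pattern KV = Pattern.compile("(\\w+)=(\\S+)");

            public static String scrub(String message) {
                Matcher m = KV.matcher(message);
                StringBuilder sb = new StringBuilder();
                while (m.find()) {
                    String value = SENSITIVE.contains(m.group(1).toLowerCase())
                            ? "***" : m.group(2);   // mask sensitive values only
                    m.appendReplacement(sb, Matcher.quoteReplacement(m.group(1) + "=" + value));
                }
                m.appendTail(sb);
                return sb.toString();
            }

            public static void main(String[] args) {
                System.out.println(scrub("login failed user=luca password=hunter2 ip=10.0.0.7"));
                // prints: login failed user=luca password=*** ip=***
            }
        }

    This doesn't replace the policy (the safest data is data that was never traced), but it catches the developer who forgets.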

    Read the article

  • Library to fake intermittent failures according to tester-defined policy?

    - by crosstalk
    I'm looking for a library that I can use to help mock a program component that works only intermittently: usually it works fine, but sometimes it fails. For example, suppose I need to read data from a file, and my program has to avoid crashing or hanging when a read fails due to a disk head crash. I'd like to model that by having a mock data-reader function that returns mock data 90% of the time, but hangs or returns garbage otherwise. Or, if I'm stress-testing my full program, I could turn on debugging code in my real data-reader module to make it return real data 90% of the time and hang otherwise. Now, obviously, in this particular example I could just code up my mock manually to test against a random() routine. However, I was looking for a system that allows implementing any failure policy I want, including:

    - Fail randomly 10% of the time
    - Succeed 10 times, fail 4 times, repeat
    - Fail semi-randomly, such that one failure tends to be followed by a burst of more failures
    - Any policy the tester wants to define

    Furthermore, I'd like to be able to change the failure policy at runtime, using either code internal to the program under test, or external knobs or switches (though the latter can be implemented with the former). In pig-Java, I'd envision a FailureFaker interface like so:

        interface FailureFaker {
            /** Return true if and only if the mocked operation succeeded.
                Implementors should override this method with versions
                consistent with their failure policy. */
            public boolean attempt();
        }

    Each failure policy would be a class implementing FailureFaker; for example, there would be a PatternFailureFaker that would succeed N times, then fail M times, then repeat, and an AlwaysFailFailureFaker that I'd use temporarily when I need to simulate, say, someone removing the external hard drive my data was on. The policy could then be used (and changed) in my mock object code like so:

        class MyMockComponent {
            FailureFaker faker;

            public void doSomething() {
                if (faker.attempt()) {
                    // ...
                } else {
                    throw new RuntimeException();
                }
            }

            void setFailurePolicy(FailureFaker policy) {
                this.faker = policy;
            }
        }

    Now, this seems like something that would be part of a mocking library, so I wouldn't be surprised if it's been done before. (In fact, I got the idea from Steve Maguire's Writing Solid Code, where he discusses this exact idea on pages 228-231, saying that such facilities were common in Microsoft code of that early-'90s era.) However, I'm only familiar with EasyMock and jMockit for Java, and neither AFAIK has this function, or something similar with different syntax. Hence, the question: Do such libraries as I've described above exist? If they do, where have you found them useful? If you haven't found them useful, why not?
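
    For what it's worth, two of the policies on that list are only a few lines each to hand-roll against the FailureFaker interface above. A minimal sketch; the interface is the asker's, these implementations are mine:

        import java.util.Random;

        /** Succeed N times, then fail M times, then repeat. */
        class PatternFailureFaker implements FailureFaker {
            private final int succeed, fail;
            private int calls = 0;

            PatternFailureFaker(int succeed, int fail) {
                this.succeed = succeed;
                this.fail = fail;
            }

            @Override
            public boolean attempt() {
                boolean ok = calls % (succeed + fail) < succeed;
                calls++;
                return ok;
            }
        }

        /** Fail randomly at a given rate, e.g. 0.10 for 10% of the time. */
        class RandomFailureFaker implements FailureFaker {
            private final Random rng = new Random();
            private final double failureRate;

            RandomFailureFaker(double failureRate) { this.failureRate = failureRate; }

            @Override
            public boolean attempt() { return rng.nextDouble() >= failureRate; }
        }

    The burst-failure policy is the only subtle one; it needs a bit of state, such as a two-state Markov chain with a "healthy" and a "failing" state and a different failure probability in each.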

    Read the article

  • XAML RadControls Q1 2010 Official

    Q1 2010 release focuses on strengthening 3 main aspects of RadControls for Silverlight and RadControls for WPF:

    - Ensuring first-class performance for all data-centric controls through various techniques
    - Enhancing and polishing RadControls themes
    - Providing highly advanced, enterprise-level features, especially for the data visualization controls

    We know that performance is crucial for line-of-business applications. Therefore, we always make sure that RadControls can help you achieve unmatched performance, and this has always been our number one priority. RadControls achieve unbeatable performance through UI and data virtualization, data sampling, and built-in load-on-demand features. Several of the major controls in the bundles have been enhanced with UI virtualization support: Scheduler, CoverFlow, and Book. As a part of Q1 2010 we also want to bring unparalleled visual richness to your applications. To achieve that, we have done a major rework of all our themes. We used a uniform templating approach across all controls, streamlined naming conventions for resources, and delivered a much more consistent look for the controls along the way. The RadControls for WPF bundle has been enriched with two new controls: Map and Book. Another new control has been included in the Q1 2010 release; however, it continues to be in a CTP stage. This is the Transition control. We decided that this is the better way to proceed, as we will need some more input from our community on how exactly to develop this control further. Therefore, we will be blogging regularly on the development progress so that we can clearly indicate the direction in which the control is evolving and gather your feedback on whether this is the best direction. Our charting controls for Silverlight and WPF have been advanced with major new features such as data sampling, zooming and scrolling, automatic SmartLabels positioning, sorting and filtering, and many more. The new built-in paging of the GridView control now allows you to page through your data, resulting in an even faster and more responsive grid that can easily handle enormously large datasets.

    Read the article

  • The technology behind 22cans' Curiosity

    - by Cameron Scully
    I don't have a lot of experience with mobile apps, and I definitely don't know much about MMOs, but I was wondering what the basic architecture of a game like that would be (understandably, some don't consider it a game, but it must use some game theory and implementation). Mainly, how are they able to send/receive real-time feedback on the cube being chipped away at by thousands of players on their mobile devices? How is the data for the cube's millions of pieces stored and accessed so quickly? Thanks

    Read the article

  • bash script to login to webpage

    - by Nathan Cazell
    I am trying to login into this page but I cannot for the life of me get it to work. I have to login to this site when i connect to my school's wifi in order to start a session. So far ive tried to use bash and cUrl to achieve this but have only achieved to give myself a headache. will cUrl work or am I on the wrong track? Any help is greatly appreciated! Thanks, N Here's what i tried: curl --cookie-jar cjar --output /dev/null http://campus.fsu.edu/webapps/login/ curl --cookie cjar --cookie-jar cjar \ --data 'username=foo' \ --data 'password=bar' \ --data 'service=http://campus.fsu.edu/webapps/login/' \ --data 'loginurl=http://campus.fsu.edu/webapps/login/bb_bb60/logincas.jsp' \ --location \ --output ~/loginresult.html \ http://campus.fsu.edu/webapps/login/

    Read the article

  • How to securely delete files stored on a SSD?

    - by Chris Neuroth
    From a (very long, but definitely worth reading) article on SSDs: "When you delete a file in your OS, there is no reaction from either a hard drive or SSD. It isn't until you overwrite the sector (on a hard drive) or page (on an SSD) that you actually lose the data. File recovery programs use this property to their advantage, and that's how they help you recover deleted files. The key distinction between HDDs and SSDs, however, is what happens when you overwrite a file. While a HDD can simply write the new data to the same sector, an SSD will allocate a new (or previously used) page for the overwritten data. The page that contains the now-invalid data will simply be marked as invalid, and at some point it'll get erased." So, what would be the best way to securely erase files stored on an SSD? Overwriting with random data, as we are used to doing with hard disks (e.g. using the "shred" utility), won't work unless you overwrite the WHOLE drive...

    Read the article

  • Software licensing and code generation

    - by Nicol Bolas
    I'm developing a tool that generates code from various data. The tool itself will be licensed under the MIT license, which strikes a good balance for me in terms of allowing the freedom to use and modify it, while still holding the copyright. OK, but what is the legal status of the code generated by the tool? Who holds the copyright for code generated by a tool? Do I need to give users of the tool a license for the generated code, or do they already have that by virtue of it being generated by them? What is different about this code generation system (and may be relevant) is that the source information for the code generation is provided by the system itself. The user doesn't feed source data in; the source data is bundled along with it. They simply have the means to transform it in various ways (filtering out parts of the data they don't want, etc.). Obviously they could edit the bundled data. Does that affect anything about this?

    Read the article

  • Authentication system brainstorm

    - by gansbrest
    Hi. We have multiple small websites (microsites) and one main high-traffic one with a big user base. Right now the requirement is to build an authentication system which should allow users to log in with the same identity across the network. All websites run on different domains, are powered by the Drupal 6 CMS, and have separate databases (so sharing tables with a prefix is not an option; plus, it creates a huge mess in the db). Here is the set of core requirements I came up with:

    - Users should be able to log in with the same credentials to all sites within the network
    - User data is shared between the main site (storage) and all microsites within the network
    - Data is synchronized across the network when a user changes it (updates an email or password, for example)
    - The login/registration process should be seamless and consistent
    - Users can register on any of the sites across the network and use that identity to log in later on

    In the future there might be a need to add OpenID authentication options. Basically we are looking at something similar to what stackexchange does, but I'm not sure whether they have a central user base or not. I was thinking about a custom solution which will include two parts (modules): one will live on the main site, storing user data and responding to requests from clients; the second part (module) will be placed on each microsite and will send requests to the master. Some kind of client-server setup. One of the complications I see right away is #3, data synchronization across the network. I just don't want to reinvent the wheel, and maybe some work is already done in this direction. Looking forward to your ideas on how to approach this project. EDIT: We use a MySQL database.

    Read the article

  • how to save GtkTextBuffer content in a file

    - by user1565593
    I tried to save a GtkTextBuffer's content in a file. My code seems to work, but some characters are unreadable in the output file. My code:

        def on_save_clicked(self, widget, data=None):
            start = self.textbuffer.get_start_iter()
            end = self.textbuffer.get_end_iter()
            this = self.textbuffer.get_text(start, end, False)
            format = self.textbuffer.register_serialize_tagset(this)
            data = self.textbuffer.serialize(self.textbuffer, format, start, end)
            outfile = open("/home/christophe/toto.txt", "w")
            outfile.write(data)
            outfile.close()

    What is wrong in my code? Thanks for your help.

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone who has an interesting situation. He has 15,000 customers, and he asks if he should have one database per customer for their data. Without a LOT more information it's impossible to say, of course, but there are some general concepts to keep in mind. Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but around things like how many objects (tables, stored procedures, and so on) will be involved, whether there are any cross-sections of data (do the customers share location or product information?) and, very important, what the security requirements are. From the answers to these types of questions, you then have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead; it needs a certain amount of memory for locking and so on. But it has a very clean boundary: everything from objects to security can be kept apart. Having multiple customers in the same database is possible as well, using things like a schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier: I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it's a mix; perhaps this person could segment customers into larger regions or districts or products in a database, and within that database have multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
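
    A toy version of that scoring exercise, just to make the mechanics concrete (the choices, requirements, and scores here are all invented):

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Choices in rows, requirements in columns, one score per cell;
        // sum each row and the highest total wins. All numbers are made up.
        public class DecisionMatrix {
            public static void main(String[] args) {
                String[] requirements = {"security isolation", "low overhead", "cross-customer queries"};
                Map<String, int[]> choices = new LinkedHashMap<>();
                choices.put("one database per customer",    new int[] {5, 1, 2});
                choices.put("one schema per customer",      new int[] {3, 3, 3});
                choices.put("shared tables + customer key", new int[] {2, 5, 5});

                System.out.println("scored against: " + String.join(", ", requirements));
                choices.forEach((choice, scores) -> {
                    int total = 0;
                    for (int s : scores) total += s;
                    System.out.println(choice + " -> " + total);
                });
            }
        }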

    Read the article

  • Payback Is The Coupon King

    - by Troy Kitch
    PAYBACK GmbH operates the largest marketing and couponing platforms in the world, with more than 50 million subscribers in Germany, Poland, India, Italy, and Mexico.

    The Security Challenge

    PAYBACK handles millions of requests for customer loyalty coupons and card-related transactions per day under tight latency constraints, with up to 1,000 attributes or more for each PAYBACK subscriber. Among the many challenges they solved using Oracle, they had to ensure that storage of sensitive data complied with the company's stringent privacy standards aimed at protecting customer and purchase information from unintended disclosure.

    Oracle Advanced Security

    The company deployed Oracle Advanced Security to achieve reliable, cost-effective data protection for backup files and gain the ability to transparently encrypt data transfers. By using Oracle Advanced Security, organizations can comply with privacy and regulatory mandates that require encrypting and redacting (display masking) application data, such as credit cards, social security numbers, or personally identifiable information (PII). Learn more about how PAYBACK uses Oracle.

    Read the article

  • Will a polled event system cause lag for a server?

    - by Milo
    I'm using a library called ENet. It is a reliable UDP library. It works as a polled event system, like this:

        ENetEvent event;

        /* Wait up to 1000 milliseconds for an event. */
        while (enet_host_service (client, & event, 1000) > 0)
        {
            switch (event.type)
            {
            case ENET_EVENT_TYPE_CONNECT:
                printf ("A new client connected from %x:%u.\n",
                        event.peer -> address.host,
                        event.peer -> address.port);
                /* Store any relevant client information here. */
                event.peer -> data = "Client information";
                break;

            case ENET_EVENT_TYPE_RECEIVE:
                printf ("A packet of length %u containing %s was received from %s on channel %u.\n",
                        event.packet -> dataLength,
                        event.packet -> data,
                        event.peer -> data,
                        event.channelID);
                /* Clean up the packet now that we're done using it. */
                enet_packet_destroy (event.packet);
                break;

            case ENET_EVENT_TYPE_DISCONNECT:
                printf ("%s disconnected.\n", event.peer -> data);
                /* Reset the peer's client information. */
                event.peer -> data = NULL;
            }
        }

    It waits up to 1000 milliseconds for an event. If I'm hosting, say, 75 event-driven card games and a lobby on the same thread as this code, will it cause any problems? If my understanding is correct, the process will simply sleep until there is an event; when there is one, it will process the event and then come back here, where potentially 5 or so events have queued up in the meantime, so enet_host_service would return right away and not cause lag. I have been advised not to use multiple threads; will that be alright like this? Thanks

    Read the article

  • Google I/O 2010 - The SketchUp 3D API

    Google I/O 2010 - The SketchUp 3D API: Working with 3D geospatial data. Geo 201. Speaker: Matt Lowrie. The world is a three-dimensional space; your geospatial applications should be showing it that way. This session will show how to create 3D data in Building Maker and then use the SketchUp API to customize that data to fit your needs. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 58:28.

    Read the article
