Search Results

Search found 58653 results on 2347 pages for 'seed data'.


  • Analyzing I/O Characteristics and Sizing Storage Systems for SQL Server Database Applications

    Understanding how to analyze the characteristics of I/O patterns in the Microsoft® SQL Server® data management software and how they relate to a physical storage configuration is useful in determining deployment requirements for any given workload. A well-performing I/O subsystem is a critical component of any SQL Server application. I/O subsystems should be sized in the same manner as other hardware components such as memory and CPU. As workloads increase, it is common to increase the number of CPUs and increase the amount of memory. Increasing disk resources is often necessary to achieve the right performance, even if there is already enough capacity to hold the data.

    Read the article

  • Greenpeace assesses the impact of cloud computing on the environment: Google and Yahoo praised, Apple

    Greenpeace assesses the impact of cloud computing on the environment. The association praises Google and Yahoo and calls Apple and Facebook to order. Greenpeace has just released its traditional report on the ecological impact of new technologies. This time, the activist association took a particular interest in cloud computing. The power consumption of data centers, judged too high, is singled out. Figures in hand, Greenpeace believes that in 10 years, the infrastructure behind "cloud computing" (networks, and therefore data centers) will consume as much electricity as Germany, Brazil, Canada and France combined. The association...

    Read the article

  • Is server validation necessary with client-side validators?

    - by peroija
    I recently created a .NET web app that uses over 200 custom validators on one page. I wrote code for both ClientValidationFunction and OnServerValidate, which results in a ton of repetitive code. My SQL statements are parameterized, and I have functions that pull data from the input fields and validate it before passing it to the SQL statements or stored procedures. The JavaScript also validates the fields before the page submits. So essentially the data is clean and valid before it even hits OnServerValidate, and clean afterwards anyway due to the aforementioned steps. This makes me question: is OnServerValidate really needed when I validate on the client side?
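    For illustration (the question is about ASP.NET, but the reasoning is language-independent): client-side validators can be bypassed entirely by posting the form with a raw HTTP client, so the server-side check is the only one guaranteed to run. A minimal sketch, in Java and with an invented validateAge rule standing in for one of the 200 validators:

        // Hypothetical server-side re-validation: never assume the browser
        // ran the JavaScript validators - a raw HTTP client skips them all.
        public class AgeValidator {
            // Accepts only values the business rule allows.
            static boolean validateAge(String raw) {
                try {
                    int age = Integer.parseInt(raw.trim());
                    return age >= 0 && age <= 150;  // same rule the client JS enforces
                } catch (NumberFormatException e) {
                    return false;                   // non-numeric input never reaches SQL
                }
            }

            public static void main(String[] args) {
                System.out.println(validateAge("42"));    // true
                System.out.println(validateAge("-1"));    // false
                System.out.println(validateAge("DROP"));  // false
            }
        }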

    Read the article

  • Ubuntu 12.04 x64 doesn't boot anymore after a power failure

    - by Felix
    I'm a Windows user and I have no experience with Linux and Ubuntu. I installed Ubuntu 12.04 on my netbook (an Asus 1215B) and everything worked fine. Yesterday I ran the update application and updated over 120 "things" (I have no idea what exactly). After that I was asked to reboot, and I did. Ubuntu started again, but at the loading screen with the five dots that normally change color, it froze. After 20 minutes I took out the battery to try another reboot (yes, not the best idea), and now nothing happens. I boot from the HDD and I get the error "BOOTMGR is missing". I have important data on the hard drive. Is there a way to get this fixed? Or if not, to at least get the data off the hard drive?

    Read the article

  • How should I define my Java Objects?

    - by HonorGod
    I have a data grid where I show roughly the following information:

    All Guests: Total Adults = 22, Total Children = 27
    - Confirmed: Total Adults = 9, Total Children = 13
      - Country = Germany: Total Adults = 5, Total Children = 6
        - Friends: Adults = 2, Children = 2
        - Relatives: Adults = 3, Children = 4
      - Country = USA: Total Adults = 4, Total Children = 7
        - Friends: Adults = 2, Children = 5
        - Relatives: Adults = 2, Children = 2
    - Tentative: Total Adults = 13, Total Children = 14
      - Country = Australia: Total Adults = 7, Total Children = 8
        - Friends: Adults = 2, Children = 3
        - Relatives: Adults = 5, Children = 5
      - Country = China: Total Adults = 6, Total Children = 6
        - Friends: Adults = 2, Children = 4
        - Relatives: Adults = 4, Children = 2

    In the database, what I have is data at the lowest level - Friends/Relatives and the corresponding countries - set as a look-up value which is indirectly connected to another look-up that tells me whether they fall under Confirmed or Tentative. I guess my question is: how do I lay out my Java objects, perform the aggregations, and give the result back to the client? I am not sure if my question is clear, but feel free to comment so I can update it accordingly.
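    One possible Java layout (a sketch only; all names are invented, and the original question leaves the design open): keep a flat record per lowest-level row and derive every total in the grid with grouped aggregation instead of storing it.

        import java.util.List;
        import java.util.Map;
        import java.util.stream.Collectors;

        // Flat record at the lowest level of the hierarchy; every total in
        // the grid (status -> country -> relation) is computed, never stored.
        record Guest(String status, String country, String relation,
                     int adults, int children) {}

        public class GuestRollup {
            public static void main(String[] args) {
                List<Guest> guests = List.of(
                        new Guest("Confirmed", "Germany", "Friends", 2, 2),
                        new Guest("Confirmed", "Germany", "Relatives", 3, 4),
                        new Guest("Confirmed", "USA", "Friends", 2, 5));

                // status -> country -> total adults, mirroring the grid's nesting
                Map<String, Map<String, Integer>> adultsByStatusAndCountry =
                        guests.stream().collect(Collectors.groupingBy(
                                Guest::status,
                                Collectors.groupingBy(Guest::country,
                                        Collectors.summingInt(Guest::adults))));

                // prints something like {Confirmed={Germany=5, USA=2}}
                System.out.println(adultsByStatusAndCountry);
            }
        }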

    Read the article

  • USB NTFS pen drive and hard drive not working after connecting to Ubuntu

    - by lemon619
    As said in the title: first I connected a USB key (NTFS-formatted) with data on it to Ubuntu 12.04. After I disconnected it, I was no longer able to access the USB key from either Windows or Ubuntu 12.04; the system asks me to format it. Just now I connected an NTFS-formatted hard drive with data on it to Ubuntu 12.04. I ejected the hard drive correctly and connected it to Windows, and the system asks me to format the hard drive too, same as the USB key. Can someone help me, please? I would really appreciate it!

    Read the article

  • How can I design a multi-threaded application for a large user base?

    - by rokonoid
    Here's my scenario: I need to develop a scalable application. My user base may be over 100K, and every user has 10 or more small tasks. Each task checks every minute with a third-party application through web services, retrieving data which is written to the database; the data is then forwarded to its original destination. So here's the processing of a small task:

        while (true) {
            boolean isNewInformationAvailable = checkWhetherInformationIsAvailableOrNot();
            if (isNewInformationAvailable) {
                fetchTheData();
                writeToDatabase();
                findTheDestination();
                deliverTheData();
            }
        }

    As the application is large, how should I approach designing this? I'm going to write it in Java. Should I use concurrency, and how would you manage the concurrency?
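    As a sketch of one concurrency approach (names invented; the question leaves the design open): with 100K+ users and 10+ tasks each, a thread per task is not viable, so a shared scheduler pool is the usual starting point, its size tuned to how long the third-party web service call blocks.

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        // One shared pool services every polling task; a million logical
        // tasks must not become a million threads.
        public class PollingScheduler {
            private static final ScheduledExecutorService pool =
                    Executors.newScheduledThreadPool(64);  // tuning knob

            static void schedulePoll(String taskId) {
                pool.scheduleAtFixedRate(() -> {
                    try {
                        // stands in for the steps in the question:
                        // checkWhetherInformationIsAvailableOrNot(), fetchTheData(),
                        // writeToDatabase(), findTheDestination(), deliverTheData()
                        System.out.println("polling " + taskId);
                    } catch (RuntimeException e) {
                        e.printStackTrace();  // one task's failure must not kill the worker
                    }
                }, 0, 1, TimeUnit.MINUTES);
            }

            public static void main(String[] args) {
                schedulePoll("user42-task1");
            }
        }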

    Read the article

  • Tracking scroll depth to reveal content engagement in Google Analytics

    This article investigates a way to track content engagement on your site. By monitoring how far down the page a visitor to your site travels and then recording the data in Google Analytics, you can discover how many of your visitors are reading your content all the way to the end. Introduction: When browsing through your Google Analytics reports you will find a massive selection of data at your fingertips. You can find how many people visited a page, how long they were there, what country they were...

    Read the article

  • Application workflow

    - by manseuk
    I am in the planning process for a new application. The application will be written in PHP (using the Symfony 2 framework), but I'm not sure how relevant that is. The application will be browser based, although there will eventually be API access for other systems to interact with the data stored within the application; again, probably not relevant at this point.

    The application manages SIM cards for lots of different providers - each SIM card belongs to a single provider, but a single customer might have many SIM cards across many providers. The application allows the user to perform actions against the SIM card - for example, activate it, barr it, check on its status, etc. Some of the providers provide an API for doing this - a single access point with multiple methods, e.g. activateSIM, getStatus, barrSIM, etc. The method names differ for each provider, and some providers offer methods for extra functions that others don't. Some providers don't have APIs but do offer these methods by email with attachments - the attachment is normally a CSV file that contains the SIM reference and the action required; the email is processed by the provider and replied to once the action has been completed.

    To give you an example, the front end of my application will provide a customer with a list of SIM cards they own and give them access to the actions provided by the provider of each specific SIM card - some methods may require extra data, which will either be stored in the backend or collected from the user in the frontend. Once the user has selected an action and added any required data, I will handle the process in the backend and provide either instant feedback, in the case of the providers with APIs, or start the process off by sending an email and waiting for its reply before processing it and updating the backend, so that the next time the user checks the SIM card its status is correct (i.e. updated by a backend process).

    My reason for creating this question is that I'm stuck! I'm confused about how to approach the actual workflow logic. I was thinking about creating a Provider interface with the most common methods - getStatus, activateSIM and barrSIM - and then implementing that interface for each provider, so class Provider1 implements Provider, then using a Factory to create the required class depending on the user-selected SIM card and invoking the selected method. This would work fine if all providers offered the same methods, but they don't - there is a common subset, but some providers offer extra methods. How can I implement that flexibly? How can I deal with the processes where the workflow is different - i.e. some methods require an API call and a returned value, while some require an email to be sent, and the next stage of the process doesn't start until the email reply is received?

    Please help! (I hope this is a readable question and that this is the correct place to be asking.)

    Update: I guess what I'm trying to avoid is a big if or switch/case statement - some design pattern that gives me a flexible approach to implementing this kind of fluid workflow. Anyone? A sketch of one such shape follows below.
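    Not from the thread itself, but one common shape for the "common core plus optional extras" problem is small capability interfaces with one adapter per provider; the email-driven providers simply return a pending result instead of a completed one. Sketched in Java for brevity (the project is PHP, where the same shape maps onto PHP interfaces), with all names invented:

        // Core operations every provider supports.
        interface SimProvider {
            ActionResult activate(String simRef);
            ActionResult barr(String simRef);
            ActionResult status(String simRef);
        }

        // Optional capability - only some providers implement it.
        interface SupportsTopUp {
            ActionResult topUp(String simRef, int megabytes);
        }

        // completed=true for API providers; false while an email round trip is pending.
        record ActionResult(boolean completed, String reference) {}

        class ApiProvider implements SimProvider, SupportsTopUp {
            public ActionResult activate(String simRef) { return new ActionResult(true, simRef); }
            public ActionResult barr(String simRef)     { return new ActionResult(true, simRef); }
            public ActionResult status(String simRef)   { return new ActionResult(true, simRef); }
            public ActionResult topUp(String simRef, int mb) { return new ActionResult(true, simRef); }
        }

        class EmailProvider implements SimProvider {
            public ActionResult activate(String simRef) { return queueCsvEmail("ACTIVATE", simRef); }
            public ActionResult barr(String simRef)     { return queueCsvEmail("BARR", simRef); }
            public ActionResult status(String simRef)   { return queueCsvEmail("STATUS", simRef); }
            private ActionResult queueCsvEmail(String action, String simRef) {
                // send the CSV attachment here; completion arrives later by reply email
                return new ActionResult(false, action + ":" + simRef);
            }
        }

    The frontend can then show an action only when the capability is present (provider instanceof SupportsTopUp), which avoids the big switch/case the update asks about.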

    Read the article

  • Are we queueing and serializing properly?

    - by insta
    We process messages through a variety of services (one message will touch probably nine services before it's done, each performing a specific IO-related function). Right now we have a combination of the worst case (XML data contract serialization) and best case (in-memory MSMQ) for performance.

    The nature of the message means that our serialized data ends up at about 12-15 kilobytes, and we process about 4 million messages per week. Persistent messages in MSMQ were too slow for us, and as the data grows we are feeling the pressure from MSMQ's memory-mapped files. The server is at 16 GB of memory usage and growing, just for queueing, and performance also suffers when the memory usage is high, as the machine starts swapping. We're already doing the MSMQ self-cleanup behavior.

    I feel like there's a part we're doing wrong here. I tried using RavenDB to persist the messages and just queueing an identifier, but the performance there was very slow (1,000 messages per minute, at best). I'm not sure if that's a result of using the development version or what, but we definitely need a higher throughput[1]. The concept worked very well in theory, but performance was not up to the task.

    The usage pattern has one service acting as a router, which does all reads. The other services attach information based on their third-party hook and forward back to the router. Most objects are touched 9-12 times, although about 10% are forced to loop around in this system for a while until the third parties respond appropriately. The services account for this and have appropriate sleeping behaviors, as we utilize the priority field of the message for this reason.

    So, my question is: what is an ideal stack for message passing between discrete-but-LAN'ed machines in a C#/Windows environment? I would normally start with BinaryFormatter instead of XML serialization, but that's a rabbit hole if a better way is to offload serialization to a document store. Hence, my question.

    [1]: The nature of our business means the sooner we process messages, the more money we make. We've empirically proven that processing a message later in the week means we are less likely to make that money. While a performance of "1,000 per minute" sounds plenty fast, we really need that number upwards of 10k/minute. Just because I'm giving numbers in messages per week doesn't mean we have a whole week to process those messages.
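    A side note for readers (not the thread's settled answer): the "persist the payload once, queue only an identifier" idea the poster tried with RavenDB is often called the claim-check pattern. A minimal in-memory sketch of the shape, in Java rather than the poster's C# and with all names invented:

        import java.util.Map;
        import java.util.UUID;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.LinkedBlockingQueue;

        // Claim check: the heavy payload is stored once and each hop passes
        // only a small ticket, so the queue stays tiny regardless of payload size.
        public class ClaimCheck {
            static final Map<UUID, byte[]> payloadStore = new ConcurrentHashMap<>();
            static final BlockingQueue<UUID> queue = new LinkedBlockingQueue<>();

            static UUID checkIn(byte[] payload) {
                UUID ticket = UUID.randomUUID();
                payloadStore.put(ticket, payload);  // one write, shared by all nine hops
                queue.add(ticket);                  // the queue carries 16 bytes, not 12-15 KB
                return ticket;
            }

            static byte[] nextPayload() throws InterruptedException {
                UUID ticket = queue.take();         // blocks until a ticket arrives
                return payloadStore.get(ticket);
            }

            public static void main(String[] args) throws InterruptedException {
                checkIn("12-15 KB of message body".getBytes());
                System.out.println(new String(nextPayload()));
            }
        }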

    Read the article

  • Is there a pattern to restrict which classes can update another class?

    - by Mike
    Say I have a class ImportantInfo with a public writable property Data. Many classes will read this property but only a few will ever set it. Basically, if you want to update Data you should really know what you're doing. Is there a pattern I could use to make this explicit other than by documenting it? For example, some way to enforce that only classes that implement IUpdateImportantData can do it (this is just an example)? I'm not talking about security here, but more of a "hey, are you sure you want to do that?" kind of thing.
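    For illustration only (the thread doesn't prescribe this): one way to make "you should really know what you're doing" compile-time-visible in Java is the passkey idiom - the setter demands a key object that only a trusted class can construct. All names here are invented:

        // Passkey idiom: setData requires a WriteKey, and WriteKey's constructor
        // is private, so only TrustedUpdater can ever produce one.
        class ImportantInfo {
            private String data;

            public String getData() { return data; }

            public void setData(String newData, TrustedUpdater.WriteKey key) {
                if (key == null) throw new IllegalArgumentException("write key required");
                this.data = newData;
            }
        }

        class TrustedUpdater {
            public static final class WriteKey {
                private WriteKey() {}  // only TrustedUpdater can construct one
            }
            private static final WriteKey KEY = new WriteKey();

            void update(ImportantInfo info, String value) {
                info.setData(value, KEY);  // the "yes, I'm sure" proof
            }
        }

    Readers are unaffected, and any class that wants write access must be handed a key explicitly, which makes the intent auditable.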

    Read the article

  • Estimating file transfer time over a network?

    - by rocko
    I am transferring a file from one server to another. So, to estimate the time it would take to transfer some GBs of files over the network, I ping that IP and take the average time. For example: I ping 172.26.26.36 and get an average round-trip time of x ms. Since ping sends 32 bytes of data each time, I estimate the speed of the network to be 2*32*8 (bits) / x = y Mbps - the multiplication by 2 is because it's the average round-trip time. So transferring 5 GB of data will take 5000/y seconds. Am I correct in my method of estimating the time? If you find any mistake, or have any other good method, please share.
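    A worked instance of the formula, with explicit units (the numbers are invented for illustration; whether ping latency is a sound bandwidth proxy is exactly what the question asks):

        // Encodes the question's estimate: 2 * 32 bytes per round trip.
        public class TransferEstimate {
            public static void main(String[] args) {
                double rttMs = 2.0;                     // example average ping RTT
                double bitsPerRtt = 2 * 32 * 8;         // 64 bytes round trip = 512 bits
                double bitsPerMs = bitsPerRtt / rttMs;  // 256 bits/ms
                double mbps = bitsPerMs * 1000 / 1e6;   // 0.256 Mbps - note the 1000x unit step
                double fileBits = 5.0 * 8 * 1e9;        // 5 GB (decimal) in bits
                double seconds = fileBits / (mbps * 1e6);
                // ~156,250 s (about 43 hours) - a 32-byte ping mostly measures
                // latency, so the rate comes out far below real link capacity
                System.out.printf("rate %.3f Mbps, transfer %.0f s%n", mbps, seconds);
            }
        }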

    Read the article

  • Need suggestions on what you regard as "security"

    - by John Breakwell
    I’m currently writing a large piece on MSMQ security and wanted to check I was covering the right areas. I have some doubts as I’ve seen the occasional MSMQ forum question where a poster has used the word “security” in different contexts to what I was expecting. So here are the areas I plan to cover:

    Message security
    - encryption on the wire (SSL and IPSEC)
    - encryption of the message (MSMQ encryption)
    - encryption of the payload (data encryption)
    - signing and authentication

    Queue security
    - SIDs and ACLs
    - discoverability
    - cross-forest issues

    Storage security
    - NTFS permissions
    - unencrypted data

    Service security
    - ports and firewalls
    - DoS attacks
    - hardened mode (HTTP only)
    - RPC secure channel requirement
    - authenticated RPC requirement
    - Active Directory object permissions

    Setup
    - administrator requirements

    What else would you want to see?

    Read the article

  • Perform Better with Microsoft Project 2007 and Visio 2007. Now 20% OFF!

    To increase productivity, it is essential that your employees have easy access to productivity tools that make the best use of their resources. Microsoft Office Project 2007 and Microsoft Visio 2007 now work better together by:

    - reducing repetitive tasks with diagrams that refresh automatically with Data Refresh and Data Connect
    - tracking the source of issues quickly with the Track Drivers feature
    - tracking budgets with the new Cost Resources field
    - building ready-to-use reports with the Visual Reports engine
    - sharing and managing documents on collaborative workspaces with Windows SharePoint Services

    For a limited time, you can license the Microsoft Office Project 2007 system and the Microsoft Office Visio 2007 system under the No Better Time offer at a 20% discount for Open Value and a 15% discount for Desktop SKUs. Hurry - there's No Better Time for you to buy Microsoft Office Project 2007 and Microsoft Office Visio 2007. Click here to view the Terms and Conditions.

    Read the article

  • How can I use Ubuntu to rescue files from an NTFS drive?

    - by dago
    I reinstalled Windows on my daughter's laptop. Before doing this we made a copy of her music, pictures and so on. Unfortunately, the folder we created for the backup was empty (I checked after I reinstalled Windows). Today a miracle happened: suddenly the data was back, but when I tried to copy it to another place it disappeared again. Checking the drive with Disk Utility I got the message "filesystem is not clean". Is there a way to recover the data? The preferred solution would be a tool with a GUI. The drive I need to repair is formatted as NTFS and is connected to the laptop via USB.

    Read the article

  • IPC in C under Linux

    - by poly
    I'm building a messaging solution with the following setup: all the messages are saved in a DB, two or more reader processes read from this DB and send data to other process(es), which send it over the network. My approach is depicted below, with 4 sender processes and 4 FIFOs, and 2 readers with 2 FIFOs:

    reader0 - reads data from DB
    reader1 - reads data from DB

    Sending part:
    network_handler0 <- network_handler_fifo0 <- reader0
    network_handler1 <- network_handler_fifo1 <- reader1
    network_handler2 <- network_handler_fifo2 <- reader0
    network_handler3 <- network_handler_fifo3 <- reader1

    Receiving part:
    network_handler0 -> reader_fifo0 -> reader0 -> write to DB
    network_handler1 -> reader_fifo1 -> reader1 -> write to DB
    network_handler2 -> reader_fifo0 -> reader0 -> write to DB
    network_handler3 -> reader_fifo1 -> reader1 -> write to DB

    I have a few problems with this setup, and please note that the number of processes could be larger depending on the environment - it could be 20 readers and 10 network_handlers, or as shown above.

    1. The buffer size is 64K and the message size is 200K - is this small enough to make the write/read to/from the FIFO atomic?
    2. How can I make the processes aware of each other? For example, reader0 writes to network_handler_fifo0 and network_handler_fifo2 - how can I make it start writing to other FIFOs if the current ones are full or their network_handlers are dead?

    I thought about making the reader process more general in its writing, so that it writes to all network FIFOs using a lock mechanism and stops writing to the one whose process died, but I didn't use that, as a lock mechanism could slow things down. BTW, each network_handler is an SCTP association, so network_handler0 is association 0, network_handler1 is association 1, and so on. Any idea is appreciated - even if I have to change the setup above.

    Read the article

  • "Collection Wrapper" pattern - is this common?

    - by Prog
    A different question of mine had to do with encapsulating member data structures inside classes. To understand this question better, please read that question and look at the approach discussed. One of the guys who answered that question said that the approach is good, but - if I understood him correctly - that there should be a class existing just for the purpose of wrapping the collection, instead of an ordinary class offering a number of public methods just to access the member collection.

    For example, instead of this:

        // downright exposing the concrete collection
        class SomeClass {
            Thing[] someCollection;
            // other stuff omitted
            Thing[] getCollection() { return someCollection; }
        }

    Or this:

        // encapsulating the collection, but inflating the class' public interface
        class SomeClass {
            Thing[] someCollection;
            // class functionality omitted
            public Thing getThing(int index) { return someCollection[index]; }
            public int getSize() { return someCollection.length; }
            public void setThing(int index, Thing thing) { someCollection[index] = thing; }
            public void removeThing(int index) { someCollection[index] = null; }
        }

    We'll have this:

        // encapsulating the collection in a different class, dedicated to this
        class SomeClass {
            CollectionWrapper someCollection;
            CollectionWrapper getCollection() { return someCollection; }
        }

        class CollectionWrapper {
            Thing[] someCollection;
            public Thing getThing(int index) { return someCollection[index]; }
            public int getSize() { return someCollection.length; }
            public void setThing(int index, Thing thing) { someCollection[index] = thing; }
            public void removeThing(int index) { someCollection[index] = null; }
        }

    This way, the inner data structure in SomeClass can change without affecting client code, and without forcing SomeClass to offer a lot of public methods just to access the inner collection; CollectionWrapper does this instead. E.g. if the collection changes from an array to a List, the internal implementation of CollectionWrapper changes, but the client code stays the same. Also, CollectionWrapper can hide certain things from the client code - for example, it can disallow mutation of the collection by not having the methods setThing and removeThing.

    This approach to decoupling client code from the concrete data structure seems IMHO pretty good. Is this approach common? What are its downfalls? Is it used in practice?

    Read the article

  • Database-independent coding framework options?

    - by statirasystems
    Background: I have not programmed in a while, besides doing VBA and a little VB.NET. So please forgive my language use - I'm green and have a head cold. I am reading all I can now, but I have no programming circles to draw from. The information I am providing is to help guide you to what I am looking for; I am not confident I can ask the question properly.

    Story: I have four different projects that I am starting. Obviously I won't be working on all of them at the same time, but they each will have similar needs and be interrelated. They are as follows:

    Desktop Environment/System User Interface - basically a product that runs on major computers via Mono or .NET and unifies the look and functions. In the context of the upcoming question, it would be able to directly access data of various types. It would work in tandem with my office suite, system manager, and network application framework.

    Office Suite - technically it would not be a suite, since I will be doing it from one interface, except for the communications application. As far as the question goes, it will need to be able to link to various data sources for storing files and for using, manipulating, and presenting information.

    System Manager - an intelligent system to manage and administer the entire network and all equipment. As far as the question goes, it needs to be able to access data for archiving and for accessing its own settings, stored in various formats (SQL or XML).

    Network Application Framework - a complete system that can be used for ERP, CRM, CMS, errata, file management, and so on. As to the question, it needs to be able to access its own data or interlink with existing applications.

    Requirements: C#; simplifies and reduces coding; uses the same code to access different databases (e.g. MySQL, MS SQL, Access, XML, ...); Mono would be nice but is not a must.

    Question: What libraries, frameworks, or other options would be able to help with this? Is there a good resource to guide me? I don't want arguing over what is best, just information to help me further understand and make an educated decision.

    Read the article

  • Why am I getting the following error when trying to start the SDK Manager?

    - by rishiag
    I have a 64-bit, 20 GB Ubuntu partition which has very little usable space, so I have put Eclipse on my Ubuntu partition and downloaded the SDKs to a folder on another partition. When I try to start the SDK Manager, I get the following error in my console:

        Unexpected exception 'Cannot run program "/media/Data/android-sdks/platform-tools/adb": error=13, Permission denied' while attempting to get adb version from '/media/Data/android-sdks/platform-tools/adb'

    I have run chmod recursively on the android-sdks directory. If I change the SDK path to the Ubuntu partition, the SDK Manager starts successfully. Is there anything I can do other than increasing the partition size? Thanks

    Read the article

  • SQL Saturday #248 Tampa

    SQLSaturday BI & Big Data Edition is a free training event for everyone interested in learning about Business Intelligence & Big Data with a focus on the Microsoft SQL Server platform. This event will be held November 9th, 2013. In addition to our free Saturday event, we will also host four paid full-day pre-conferences.

    Read the article

  • Initiating processing in a RESTful manner

    - by tom
    Let's say you have a resource on which you can perform normal PUT/POST/GET operations. It represents a BLOB of data, and the methods retrieve representations of that data, be they metadata about the BLOB or the BLOB itself. The resource is something that can be processed by the server on request - in this instance, a file that can be parsed multiple times. How do I initiate that processing? It feels a bit RPC-like. Is there best practice around this? (First time on Programmers. This is the right place for this sort of question, right?)
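    One conventional RESTful shape (a hedged sketch, not the thread's settled answer): model each processing run as its own resource, created by POSTing to a jobs sub-collection, then polled with GET. Sketched with JAX-RS annotations; the resource names are invented.

        import javax.ws.rs.GET;
        import javax.ws.rs.POST;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.core.Response;
        import java.net.URI;
        import java.util.Map;
        import java.util.UUID;
        import java.util.concurrent.ConcurrentHashMap;

        // POST /files/{fileId}/parse-jobs creates a job resource (202 Accepted
        // plus a Location header); GET on the job polls its status. The "verb"
        // becomes a noun, which keeps the interaction uniform rather than RPC-like.
        @Path("/files/{fileId}/parse-jobs")
        public class ParseJobResource {
            private static final Map<String, String> jobs = new ConcurrentHashMap<>();

            @POST
            public Response startParse(@PathParam("fileId") String fileId) {
                String jobId = UUID.randomUUID().toString();
                jobs.put(jobId, "running");  // real code would enqueue the parse here
                return Response.accepted()
                        .location(URI.create("/files/" + fileId + "/parse-jobs/" + jobId))
                        .build();
            }

            @GET
            @Path("/{jobId}")
            public String status(@PathParam("jobId") String jobId) {
                return jobs.getOrDefault(jobId, "unknown");
            }
        }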

    Read the article

  • FTP changes file permissions

    - by AkBKukU
    I am trying to make changes to my website over FTP, but when I save any files it changes the permissions and owner to my username. I can edit files in the same folder through Samba without changing the permissions. I really don't understand how the permissions for the /var/www folder work, and I'm pulling my hair out trying to get it to work. I recently made changes to the permissions of the /var/www directory (following this guide) so I could modify files in the www root. Right now I have the entire contents of /var/www set as -rwxrwxr-x 1 www-data www-data, but when I change a file over FTP it becomes -rw------- 1 akbkuku akbkuku, akbkuku being my username. I am using vsftpd as the server, and I log in with my normal user. How do I make it leave the permissions alone? At this point I'll even take a way to reset all the permissions back to stock, and I'll just never modify files in the web root.
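    Not stated in the question, but the -rw------- result matches vsftpd's default upload umask of 077. A hedged sketch of the relevant /etc/vsftpd.conf setting (an assumption - verify against your vsftpd version's man page):

        # /etc/vsftpd.conf - uploaded files become 664 instead of 600
        local_umask=002
        # vsftpd always writes files as the logged-in user; to keep group access
        # for www-data, make the directory setgid and add your user to that group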

    Read the article

  • Should I use my own public API on my site (via JS)?

    - by newboyhun
    First of all, this question is far different from other 'public API questions' like Should a website use its own public API?. Second, sorry for my English. You can find the question summarized at the bottom.

    What I want to achieve is a big website with a public API, so that people who like programming (like me) and like my website can replicate my website's data with a much better approach (of course with some restrictions). Almost everything could be used through the public API. Because of this, I was thinking about making the whole website AJAX driven. There would be parts of the API which would be limited to my website (domain) only, like login and registration. There would be only an INTERFACE on the client side, which would use the public and private API to make the interface work. The website would be ONLY CLIENT SIDE - well, I mean the website would only use AJAX to use the API.

    How do I imagine this? The website would be like a mobile application: the application only sends a request to a web server, which returns JSON; the application parses it and uses it to advance in the application (e.g. login).

    My thoughts:

    Pros:
    - The whole website is built with JavaScript, which means I don't need to transfer the HTML to the client, saving bandwidth (I hope so).
    - Anyone can use the data of my website to make their own cool things (is this a con or a pro?).
    - The public API is always in use, so I can see if there are any errors.

    Cons:
    - Without JavaScript the website is unusable.
    - The bad guys can easily load the server by requesting too much data (like 10,000 requests per second), but this can be countered by limiting it with some PHP code and logging.
    - Probably much more work.

    So the question, in a few words, is: should I build my website around my own API? Is it good to work only on the client side? Is this good for a big website? (E.g. Facebook - yeah, Facebook is a different story, but could it run with an 'architecture' like this?)

    Read the article

  • For Node.js, what are the best design practices for native modules which share dependencies?

    - by Mark Essel
    Hypothetical situation: I have 3 native Node modules, A, B, and C.

    - A is a utilities module which exposes several functions to JavaScript through the Node interface; in addition, it declares/defines a rich set of native structures and functions.
    - B is a module which depends on data structures and source in A; it exposes some functions to JavaScript and, in addition, declares/defines native structures and functions.
    - C is a module which depends on data structures and source in A & B; it exposes some functions to JavaScript and, in addition, declares/defines native structures and functions.

    So far, when setting up these modules, I have a preinstall script to install the other modules' includes, but in order to access all of another module's source, what is the best way to link to its shared library object (*.node)? Is there an alternative best practice for native modules (such as installing all source from other modules before building a single module)?

    Read the article
