Search Results

Search found 40870 results on 1635 pages for 'database design'.

  • Speaking About SQL Server

    - by AllenMWhite
    There's a lot of excitement in the SQL Server world right now, with the RTM (Release to Manufacturing) release of SQL Server 2012, and the availability of SQL Server Data Tools (SSDT). My personal speaking schedule has exploded as well. Just this past Saturday I presented a session called Gather SQL Server Performance Data with PowerShell. There are a lot of events coming up, and I hope to see you at one or more of them. Here's a list of what's scheduled so far: First, I'll be presenting a session...(read more)

    Read the article

  • VirtualBox Port Forwarding to Connect to PostgreSQL Database

    - by kliao
    I'm trying to connect to a PostgreSQL database hosted on a Win7 guest from a Win7 host. I've configured security in pg_hba.conf (host all all 127.0.0.1/32 md5; host all all 10.0.2.15/32 md5; host all all 192.168.1.6/32 md5) and set the listen_addresses setting in postgresql.conf to '*'. I think I've set up port forwarding correctly, as I see the following when I call getextradata: Key: VBoxInternal/Devices/e1000/0/LUN#0/Config/win7_vm1/GuestPort, Value: 5432; Key: VBoxInternal/Devices/e1000/0/LUN#0/Config/win7_vm1/HostPort, Value: 5432; Key: VBoxInternal/Devices/e1000/0/LUN#0/Config/win7_vm1/Protocol, Value: TCP. This is similar to http://serverfault.com/questions/106168/cant-connect-to-postgresql-on-virtualbox-guest but I'm not sure what I'm doing wrong. In vbox.log I see "00:00:01.019 NAT: set redirect TCP host port 5432 = guest port 5432 @ 10.0.2.15" followed by "00:00:01.033 NAT: failed to redirect TCP 5432 = 5432", but I'm not sure how to fix that. Any ideas? Thanks.
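
    One likely culprit for the "NAT: failed to redirect TCP 5432 = 5432" line is that VirtualBox could not bind the host-side port, which usually means something on the host (for instance, a local PostgreSQL installation) is already listening on 5432. That diagnosis is an assumption, not something the log confirms; a minimal Python sketch to test for that one cause:

      import socket

      # Try to bind the host-side port VirtualBox wants for its NAT redirect.
      # If bind() fails, some host process already owns 5432, which would
      # explain the "failed to redirect" entry in vbox.log.
      def host_port_is_free(port, host="127.0.0.1"):
          with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
              try:
                  s.bind((host, port))
                  return True
              except OSError:
                  return False

      if __name__ == "__main__":
          if host_port_is_free(5432):
              print("Host port 5432 is free; the NAT redirect should bind.")
          else:
              print("Host port 5432 is taken; forward another host port, e.g. 15432.")

    If the port turns out to be taken, mapping a different host port (say 15432) to guest port 5432 and connecting to that should work around it.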

    Read the article

  • How should we deal with multiple transaction-report requests?

    - by Mithir
    We are developing a system for the retail market, one of whose features will enable clients (actually consumer clubs) to go through all transactions made by end-clients. One of the ways to get this information will be via an API. The idea is that there will be requests for reports with a start date and an end date, and a response will have all the transactions between those dates. We are worried that some reports may be very large, and that some clients will repeatedly request reports; in this case the DB and CPU will be heavily overloaded. The same server that will service those requests also handles the actual retail transactions (received by proprietary devices) and a Web application. We are not sure how to limit the report requests from the API so that they won't affect the system too much. So, how should we deal with this scenario? Any thoughts? EDIT: just to make clear: when I mentioned proprietary devices I meant "On-Location" devices which are used during sales with end-clients; these requests shouldn't get delayed, and this is the main concern.
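
    One common mitigation, sketched below in Python, is to cap concurrent report generation per client and split large date ranges into fixed-size chunks, so no single request monopolizes the DB while the on-location devices are being serviced. The cap and chunk size here are invented policy values, not anything from the question:

      from datetime import timedelta
      from threading import BoundedSemaphore

      MAX_CONCURRENT_REPORTS = 2   # assumed per-deployment policy
      MAX_DAYS_PER_CHUNK = 7       # assumed chunk size

      _slots = BoundedSemaphore(MAX_CONCURRENT_REPORTS)

      def date_chunks(start, end, days=MAX_DAYS_PER_CHUNK):
          """Yield (chunk_start, chunk_end) date pairs covering [start, end]."""
          cur = start
          while cur <= end:
              nxt = min(cur + timedelta(days=days - 1), end)
              yield cur, nxt
              cur = nxt + timedelta(days=1)

      def fetch_report(start, end, query_chunk):
          """query_chunk(a, b) runs one small DB query; it is called per chunk."""
          if not _slots.acquire(blocking=False):
              raise RuntimeError("Too many concurrent reports; retry later (HTTP 429).")
          try:
              rows = []
              for a, b in date_chunks(start, end):
                  rows.extend(query_chunk(a, b))  # short queries keep DB load smooth
              return rows
          finally:
              _slots.release()

    Chunking also makes it natural to pause between queries, or to run report queries at a lower priority than the transaction path.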

    Read the article

  • Grid Style. Is It The Next Big Thing In Web Design?

    Inspired by typographic magazine layout grids, more and more web designers are starting to embrace grid-based design, citing cleaner and more easily digestible web pages as a benefit. The concept of ... [Author: Michiel Van Kets - Web Design and Development - June 17, 2010]

    Read the article

  • SQL Server 2012 and SQLMail - will it still work?

    - by Kharlos Dominguez
    We are considering upgrading our SQL Server, which is currently running 2005. We use SQLMail heavily in the organization, both to send e-mails and to import some into a database. I've read in various places that SQLMail was deprecated and superseded by "Database Mail". I'm confused because this MS page: http://msdn.microsoft.com/en-us/library/bb402904.aspx seems to imply that it would still work. I understand the dangers of SQLMail, but we do not have the resources to rewrite the scripts right now and would prefer to do it later on. Does SQLMail still work in 2012, and if not, how easy is it to replace with Database Mail, both for reading and sending e-mails?
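
    For the sending half, Database Mail's replacement for SQLMail's xp_sendmail is the msdb.dbo.sp_send_dbmail procedure (Database Mail only sends; it has no counterpart to SQLMail's mail-reading side). A minimal sketch of calling it from Python with pyodbc, where the server, profile and recipient are placeholders and Database Mail is assumed to be already configured:

      import pyodbc

      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
          "DATABASE=msdb;Trusted_Connection=yes;",
          autocommit=True,
      )
      # Queue one message through the (placeholder) Database Mail profile.
      conn.execute(
          """EXEC msdb.dbo.sp_send_dbmail
                 @profile_name = ?,
                 @recipients   = ?,
                 @subject      = ?,
                 @body         = ?""",
          "DefaultProfile", "ops@example.com", "Nightly job", "Job completed.",
      )
      conn.close()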

    Read the article

  • Converting large files in Python

    - by Cenoc
    I have a few files that are ~64GB in size that I think I would like to convert to hdf5 format. I was wondering what the best approach for doing so would be? Reading line-by-line seems to take more than 4 hours, so I was thinking of using multiprocessing in sequence, but was hoping for some direction on what would be the most efficient way without resorting to hadoop. Any help would be very much appreciated (and thank you in advance). EDIT: Right now I'm just doing a for line in fd: approach. After that I just check to make sure I'm picking out the right sort of data, which is very quick; I'm not writing anywhere, and it's taking around 4 hours to complete with that. I can't read fixed blocks of data because the blocks in this weird file format I'm reading are not standard; it switches between three different sizes, and you can only tell which by reading the first few characters of the block.
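
    Since the block size is only discoverable from each block's first few characters, a single sequential reader that dispatches on that header is hard to avoid; the win comes from batching the HDF5 writes instead of touching the file per line. A sketch under invented assumptions (a one-byte tag selecting one of three sizes, and a recent h5py providing vlen_dtype):

      import numpy as np
      import h5py

      BLOCK_SIZES = {b"A": 512, b"B": 1024, b"C": 4096}  # hypothetical format
      BATCH = 10_000  # blocks buffered per HDF5 append

      def blocks(path):
          """Yield raw blocks, sizing each one from its leading tag byte."""
          with open(path, "rb") as fd:
              while True:
                  tag = fd.read(1)
                  if not tag:
                      return
                  yield fd.read(BLOCK_SIZES[tag])

      def append(dset, buf):
          """Append a list of byte arrays to a variable-length dataset."""
          arr = np.empty(len(buf), dtype=object)
          arr[:] = buf
          dset.resize(dset.shape[0] + len(buf), axis=0)
          dset[-len(buf):] = arr

      with h5py.File("out.h5", "w") as h5:
          dset = h5.create_dataset("blocks", shape=(0,), maxshape=(None,),
                                   dtype=h5py.vlen_dtype(np.uint8), chunks=True)
          buf = []
          for blk in blocks("input.bin"):
              buf.append(np.frombuffer(blk, dtype=np.uint8))
              if len(buf) >= BATCH:
                  append(dset, buf)
                  buf = []
          if buf:
              append(dset, buf)

    If the 4 hours are dominated by raw disk reads, multiprocessing won't help much; if they are dominated by Python-side parsing, feeding whole buffered batches to worker processes can.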

    Read the article

  • How an SSD hard drive affected the speed of your website (ASP.NET/LINQ/MS SQL database)

    - by Sergey Osypchuk
    I have a small database (<1GB), but we have a lot of complex logic in the website and clients complain about render time, which is 3-5 seconds. We are not Google, and thousands of users a day is our dream, so size is not a problem, but speed is important. Can anybody share experience with SSD drives for an ASP.NET (MVC)/LINQ/MS SQL based application? How much did your performance increase? UPDATE: this whitepaper states that it will be 20 times faster. http://www.texmemsys.com/files/f000174.pdf

    Read the article

  • Does my approach for building a real time monitoring system make sense? [closed]

    - by sameer
    I am developing an application that will display a dashboard showing data from different SQL databases. This needs to happen in almost real time; our refresh time is about 5 minutes. My approach so far is: develop a Windows service to accumulate the data from various SQL Server instances; persist those details into a SQL DB, from which the dashboard will display them on the web page; have the Windows service trigger fetching of the data every x minutes. The details of the SQL Server instances will be stored in the SQL DB which the Windows service will refer to. Does my approach make sense?
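
    As a concreteness check, the accumulator half of that design is a small polling loop. A hedged sketch with pyodbc, where the instance list, connection strings and dbo.Readings table are all made up (in the real design the instance details would come from the SQL DB mentioned above):

      import time
      import pyodbc

      REFRESH_SECONDS = 300  # the ~5-minute refresh from the requirement
      INSTANCES = ["sql01", "sql02"]  # placeholder; really read from the config DB
      CENTRAL = ("DRIVER={ODBC Driver 17 for SQL Server};"
                 "SERVER=central;DATABASE=Dashboard;Trusted_Connection=yes;")

      def poll_instance(server):
          """Grab one reading from a monitored instance."""
          conn = pyodbc.connect(
              f"DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={server};"
              "DATABASE=master;Trusted_Connection=yes;")
          row = conn.execute("SELECT @@SERVERNAME, SYSDATETIME()").fetchone()
          conn.close()
          return row

      def run_forever():
          central = pyodbc.connect(CENTRAL, autocommit=True)
          while True:
              for server in INSTANCES:
                  name, ts = poll_instance(server)
                  central.execute(
                      "INSERT INTO dbo.Readings (InstanceName, CollectedAt) "
                      "VALUES (?, ?)", name, ts)
              time.sleep(REFRESH_SECONDS)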

    Read the article

  • Should a primary key be immutable?

    - by Vincent Malgrat
    A recent question on stackoverflow provoked a discussion about the immutability of primary keys. I had thought that it was a kind of rule that primary keys should be immutable. If there is a chance that some day a primary key would be updated, I thought you should use a surrogate key. However it is not in the SQL standard, and some RDBMS' "cascade update" feature allows a primary key to change. So my question is: is it still a bad practice to have a primary key that may change? What are the cons, if any, of having a mutable primary key?
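
    For concreteness, this is the "cascade update" escape hatch the post refers to, shown with sqlite3 since it ships with Python; the schema is invented for illustration, and other RDBMSs with foreign-key support behave the same way:

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("PRAGMA foreign_keys = ON")
      conn.execute("CREATE TABLE customer (code TEXT PRIMARY KEY)")
      conn.execute("""CREATE TABLE orders (
                          id INTEGER PRIMARY KEY,
                          customer_code TEXT REFERENCES customer(code)
                              ON UPDATE CASCADE)""")
      conn.execute("INSERT INTO customer VALUES ('ACME')")
      conn.execute("INSERT INTO orders (customer_code) VALUES ('ACME')")

      # The natural key changes, and every referencing row gets rewritten
      # with it: this write amplification is one concrete con of a mutable
      # primary key that a surrogate key avoids.
      conn.execute("UPDATE customer SET code = 'ACME-2024'")
      print(conn.execute("SELECT customer_code FROM orders").fetchone())
      # -> ('ACME-2024',)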

    Read the article

  • choosing the right RAID level for a PostgreSQL database

    - by Sergey
    Hi, I got a disk array appliance of 8 disks, 1TB each (UltraStor RS8IP4). It will be used solely by a PostgreSQL database and I am trying to choose the best RAID level for it. The top priority is read performance, since we operate on large data sets (tables, indexes) and we do lots of searches/scans. With the old disks that we have now, the most slowdowns happen on SELECTs. Fault tolerance is less important; surviving 1 or 2 disk failures is enough. Space is the least important factor; even 1TB will be enough. Which RAID level would you recommend in this situation? The current options are 60, 50 and 10, but probably other options can be even better.

    Read the article

  • Speaking - 24 Hours of PASS, Summit Preview Edition

    - by AllenMWhite
    There's so much to learn to be effective with SQL Server, and you have an opportunity to immerse yourselves in 24 hours of free technical training this week from PASS, via the 24 Hours of PASS event. I'll be presenting an introductory session on PowerShell called PowerShell 101 for the SQL Server DBA. Here's the abstract: The more you have to manage, the more likely you'll want to automate your processes. PowerShell is the scripting language that will make you truly effective at managing lots of...(read more)

    Read the article

  • Lingering database-connections from Feng Office

    - by Bobby
    I've installed Feng Office on our main server, and it is working perfectly so far. Unfortunately there seems to be a problem with the connection to the MySQL database. While the connection itself works fine, it's the reuse/pooling of connections which seems to be bugged. There are lingering/sleeping connections to the server from Feng Office which won't close and don't get reused after some time (120 seconds). Of course those lingering processes/connections are piling up pretty fast. I've found a thread in the forums about this behavior, but the suggested fix is already applied (by default). I'm sure this is just a configuration issue, but I'm a little clueless, because among a MediaWiki, a DokuWiki and homebrewed PHP applications on the same server, Feng is the only one with this issue. The setup is a Microsoft Windows 2003 Server with MySQL 5.0.26 and Apache 2.2. Where can I start looking for clues why this is happening, and how do I get rid of lingering MySQL connections?

    Read the article

  • Data Synchronization in mobile apps - multiple devices, multiple users

    - by ProgrammerNewbie
    I'm looking into building my first mobile app. One of the core features of the application is that multiple devices/users will have access to the same data, and all of them will have CRUD rights. I believe the architecture should involve a central server where all the data is stored. The devices will use an API to interact with the server to perform data operations (e.g. adding, editing or deleting a record). I imagine a scenario where synchronizing the data will become a problem. Assume the application should work when it is not connected to the Internet, and thus cannot communicate with this central server. So: User A is offline and edits record #100; User B is offline and edits record #100; User C is offline and deletes record #100; User C goes online (presumably, record #100 should get deleted on the server); Users A and B go online, but the record they edited no longer exists. All sorts of scenarios similar to the above can come up. How is this generally handled? I plan to use MySQL, but am wondering if it's not appropriate for such a problem.
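
    One widely used answer, sketched below, is to keep per-record version metadata and soft-delete tombstones instead of hard deletes, then resolve conflicts with a rule such as last-write-wins when a device syncs. The field names are illustrative; MySQL stores records and tombstones fine, since the resolution logic lives in the API layer:

      from dataclasses import dataclass

      @dataclass
      class Record:
          id: int
          payload: str
          updated_at: float      # server-assigned version or timestamp
          deleted: bool = False  # tombstone: rows are never physically removed

      def merge(server, incoming):
          """Last-write-wins: the newer updated_at takes the record."""
          return incoming if incoming.updated_at > server.updated_at else server

      # User C's offline delete becomes a tombstone; A's older edit loses.
      srv = Record(100, "original", updated_at=10.0)
      srv = merge(srv, Record(100, "original", updated_at=12.0, deleted=True))
      srv = merge(srv, Record(100, "A's edit", updated_at=11.0))
      print(srv)  # Record(id=100, payload='original', updated_at=12.0, deleted=True)

    Because record #100 still exists as a tombstone, users A and B can be told exactly what happened instead of silently losing their edits.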

    Read the article

  • The Mindset of the Enterprise DBA: Creating and Applying Standards to Our Work

    Although many professions, such as piloting, surgery and IT administration, require judgement and skill, they also require the ability to perform many repeated standard procedures in a consistent and methodical manner. These procedures leave little room for creativity, since they must be done right, and in the right order. For DBAs, standardization involves providing and following checklists, notes and instructions so that the results are predictable, correct and easy to maintain.

    Read the article

  • T-SQL Tuesday #016: Check Your Service Accounts with PowerShell

    - by AllenMWhite
    T-SQL Tuesday #016: Check Your Service Accounts with PowerShell. This T-SQL Tuesday is about Aggregate Functions. It may be a bit of a stretch, but it is a security best practice to use separate service accounts for all your SQL Server services, so I've written some PowerShell code to check whether any account is used more than once on a given machine. I take advantage of the SQLWmiManagement DLL to find the SQL Server services, which is a safer bet than filtering on a service name. First I load the SQLWmiManagement...(read more)

    Read the article

  • TechEd 2010 Followup

    - by AllenMWhite
    Last week I presented a couple of sessions at Tech Ed NA in New Orleans. It was a great experience, even though my demos didn't always work out as planned. Here are the sessions I presented: DAT01-INT Administrative Demo-Fest for SQL Server 2008 SQL Server 2008 provides a wealth of features aimed at the DBA. In this demofest of features we'll see ways to make administering SQL Server easier and faster, such as Centralized Data Management, Performance Data Warehouse, Resource Governor, Backup Compression...(read more)

    Read the article

  • What is the best way to work with large databases in Java depending on context?

    - by user19000
    We are trying to figure out the best practice for working with very large DBs in Java. What we do is a kind of BI (business intelligence), i.e. analyzing very large DBs and using them to create intermediate DBs that represent intelligent knowledge of the originals. We are currently using JDBC, and just performing queries using a ResultSet. As more and more data is being created, we are wondering whether more appropriate ways exist for parsing and manipulating these large DBs: we need to support 'chunk' manipulation and not an entire DB at once (e.g. LIMIT in JDBC has very poor performance); we do not need to be constantly connected, since we are just pulling results and creating new tables of our own. We want to understand the JDBC alternatives, with respect to advantages and disadvantages. Whether you think JDBC is the way to go or not, what are the best practices to go by depending on context (e.g. for large DBs queried in chunks)?
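
    For the chunking requirement specifically, the pattern that usually replaces LIMIT/OFFSET is keyset pagination: page on an indexed key, so every chunk is a cheap range scan no matter how deep into the table it is. Sketched here in Python with sqlite3 for brevity (the loop maps one-to-one onto a JDBC PreparedStatement); the events table is invented:

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
      conn.executemany("INSERT INTO events (payload) VALUES (?)", [("x",)] * 25)

      def chunks(conn, size=10):
          """Yield batches of rows, resuming after the last key seen."""
          last_id = 0
          while True:
              rows = conn.execute(
                  "SELECT id, payload FROM events WHERE id > ? "
                  "ORDER BY id LIMIT ?", (last_id, size)).fetchall()
              if not rows:
                  return
              yield rows
              last_id = rows[-1][0]  # keyset: no OFFSET, just an index seek

      for batch in chunks(conn):
          print(len(batch))  # analyze the batch, build intermediate tables

    Each batch is independent, which also fits the "not constantly connected" requirement: the loop can reconnect between chunks and resume from last_id.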

    Read the article

  • zero downtime during database schema upgrade on SQL 2008

    - by eject
    I have a web application on IIS7 with SQL Server 2008 as the RDBMS. I need to get 0 downtime during future upgrades of the ASP.NET code and DB schema, and I need the right scenario for this. I have 2 web servers, 2 SQL servers and one HTTP load balancer which allows switching the web backend server for web requests. The main goal is to keep the 1st web server and DB server up and running, update the code and DB schema on the 2nd server, and then switch all the requests to the 2nd server. Then comes the main problem: how to copy the data from the 1st database to the 2nd (data which was changed during the upgrade).

    Read the article

  • Fries for all?

    - by A&C Redaktion
    Yes, dear partners: on how you can protect yourselves and your customers from unwanted access, there is now a charming video clip that makes the leap from French fries to the Oracle Access Management Suite in just one minute. A playful introduction to the topic of access rights which, thanks to its well-executed surprise effect, can also be put to excellent use in customer conversations. Watch it right away, click "Like", recommend it and link to it! Further information on the Access Management portfolio is available online: http://www.oracle.com/us/products/middleware/identity-management/access-management/overview/index.html Oracle also has a new answer to the Mobile&Social topics currently being discussed in the market: http://www.oracle.com/technetwork/middleware/id-mgmt/overview/oamms-1696162.html Another video worth watching can be found here: http://www.oracle.com/us/products/middleware/identity-management/oiam/overview/index.html

    Read the article

  • MySQL with multiple threads and processes

    - by Abhan
    I'm developing a telecom messaging platform in C, and I'm going to need multiple processes to be working with a MySQL DB. How can I make two processes read/write to/from a MySQL DB and, if/when one of them goes down, get the other to seamlessly take over the work until the dead process gets back to work? I've been thinking about and googling some options, and am stuck at the point of choosing one. What I think so far is that a table lock is not the best option to go for, as it will stall the other process until the table is unlocked. The other option is to use row-level locks or manual locks, but I can't find the best way to do it.
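
    One coordination pattern worth evaluating, sketched below, is MySQL's named server-side locks (GET_LOCK): each process tries to take the same named lock, the holder becomes the active worker, and the lock is released automatically if the holder's connection dies, so the standby takes over without the table- or row-locking downsides mentioned above. The sketch uses PyMySQL with placeholder credentials; the same statements work from the C client library:

      import pymysql

      def run_active_or_standby(do_work):
          conn = pymysql.connect(host="dbhost", user="app",
                                 password="secret", database="telecom")
          with conn.cursor() as cur:
              # Negative timeout = wait forever to become the active process;
              # until then this call blocks, making us a hot standby.
              cur.execute("SELECT GET_LOCK('msg_platform_active', -1)")
              if cur.fetchone()[0] == 1:
                  do_work(conn)  # lock is held until disconnect or RELEASE_LOCK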

    Read the article

  • Saving all hits to a web app

    - by bevanb
    Are there standard approaches to persisting data for every hit that a web app receives? This would be for analytics purposes (as a better alternative to log mining down the road). Seems like Redis would be a must. Is it advisable to also use a different DB server for that table, or would Redis be enough to mitigate the impact on the main DB? Also, how common is this practice? Seems like a no-brainer for businesses who want to better understand their users, but I haven't read much about it.
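
    The usual shape of this, sketched below with redis-py, is a cheap Redis push on the request path plus an out-of-band batch flush into the analytics store, so the main DB never sees per-hit writes; the key name and hit fields are illustrative:

      import json
      import time
      import redis

      r = redis.Redis()

      def record_hit(path, user_id=None):
          """Called per request: one O(1) push, no DB touch."""
          r.rpush("hits", json.dumps(
              {"ts": time.time(), "path": path, "user": user_id}))

      def flush_hits(batch=1000):
          """Background job: atomically drain up to `batch` hits."""
          pipe = r.pipeline()  # MULTI/EXEC keeps the read + trim atomic
          pipe.lrange("hits", 0, batch - 1)
          pipe.ltrim("hits", batch, -1)
          raw, _ = pipe.execute()
          rows = [json.loads(x) for x in raw]
          # bulk-INSERT rows into the analytics table/DB here
          return rows

    Whether the flush target is the main DB or a separate analytics server then becomes a sizing question rather than a request-latency one.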

    Read the article

  • Is there a simple, flat, XML-based query-able data storage solution? [closed]

    - by alex gray
    I have been in long pursuit of an XML-based query-able data store, and despite continued searches and evaluations, I have yet to find a solution that meets my needs, which include: data wholly contained within XML nodes, in flat text files; a "native", or at least unobtrusive, method with which to perform Create/Read/Update/Delete (CRUD) operations on the "schema" (I would consider access via HTTP, XHR, JavaScript, PHP, BASH, or PERL to be unobtrusive, depending on the complexity of the set of dependencies); server-side file-system reads and writes; a client-side interface element, accessible in any browser without a plug-in. Some extra, preferred (but optional) requirements include: responding to simple SQL, or similar-syntax, queries; serving the data from a bare-bones HTTPS server, with no "extra stuff", either via XMLHttpRequest, HTTP proper, or JSON. A few thoughts: What I'm looking for may be possible via some Java server implementations, but for the sake of this question, please do not suggest that, unless it meets ALL the requirements. Java, especially on the client side, is not really an option, nor is it appealing from a development viewpoint. I know walking the filesystem is a stretch, and I've heard it's possible with XPATH or XSLT, but as far as I know, that's not ready for primetime, nor even yet a recommendation. However, the ability to recursively traverse the filesystem is needed for such a system to be of useful facility. At this point, I have basically implemented what I described via, of all things, CGI and Bash, but there has to be an easier way. Thoughts?
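
    Absent an off-the-shelf fit, most of the hard requirements can be met with Python's standard library alone: one flat XML file per collection, ElementTree for parsing, and its limited XPath subset for queries. A hedged sketch, not a full CRUD server, with an invented record/@id schema:

      import xml.etree.ElementTree as ET

      def read(path, xpath):
          """e.g. read('people.xml', ".//record[@id='7']")"""
          return ET.parse(path).getroot().findall(xpath)

      def create(path, attrs):
          """Append a <record> element with the given attributes."""
          tree = ET.parse(path)
          ET.SubElement(tree.getroot(), "record", attrs)
          tree.write(path, encoding="utf-8")

      def delete(path, xpath):
          """xpath must select direct children of the root, e.g. "record[@id='7']"."""
          tree = ET.parse(path)
          root = tree.getroot()
          for victim in root.findall(xpath):
              root.remove(victim)  # remove() only works on direct children
          tree.write(path, encoding="utf-8")

    Exposed behind a few lines of CGI (which the question already has in place), that covers CRUD, flat files and server-side reads/writes; the client-side piece is then plain XHR against those endpoints.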

    Read the article

  • Need advice concerning Feature Based Development when knowledge DB is involved

    - by voroninp
    We develop a BackOffice application which is used to edit our knowledge DB. Now our main product's development team is shifting to feature-based development, and we need to support several DBs with non-identical data schemes (the scheme changes slightly from DB to DB). The information from the knowledge DB is extracted by a script and then distributed to the clients. We also need to support merging these DBs. We are now analyzing the pros and cons of different approaches, and are discussing this one: one working DB (WDB) with one DB for each feature branch (FDB). The approved data is moved from the WDB to the FDBs, so we need to support only one script for each branch. This script will extract data from the corresponding FDB. Nevertheless, we would have to code the differences between the FDBs and the WDB manually. Maybe some automatic mapping tools exist? I also wish to know whether classic solutions to similar problems already exist. Can anyone share the best practices for this case?

    Read the article

  • Upload large database SQL file

    - by Devy
    I have a database of more than 20GB in size on my hard disk. What is the best way to upload it with the least possible (money) load on the server? I'm on Windows 7, and I have FTP and SSH access to the server. I avoid using FTP because my connection cuts off a lot; I can't imagine re-uploading the whole file after failing at 99%. I found some tools that split the large .sql file into small .sql files, but they didn't mention how to gather these files again into one file. Another way is to archive the big .sql file to .rar with the -v option, upload the parts through FTP, then unpack them. But unpacking will also cost, right? I know it will cost in any case, but any best practice will be strongly appreciated.
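
    On the "gather the pieces again" point: any splitter works as long as the parts are concatenated back in byte order; on a Linux server, cat part_* > dump.sql does exactly that. A minimal Python sketch that also gzips each part, since plain-text .sql dumps compress very well and that directly cuts upload time; the 100MB part size is arbitrary:

      import glob
      import gzip

      CHUNK = 100 * 1024 * 1024  # bytes per part, arbitrary

      def split(path, chunk=CHUNK):
          """dump.sql -> dump.sql.part0000.gz, dump.sql.part0001.gz, ..."""
          with open(path, "rb") as src:
              i = 0
              while True:
                  data = src.read(chunk)
                  if not data:
                      break
                  with gzip.open(f"{path}.part{i:04d}.gz", "wb") as part:
                      part.write(data)
                  i += 1

      def join(path):
          """Reassemble the original file from the numbered parts, in order."""
          with open(path, "wb") as dst:
              for name in sorted(glob.glob(f"{path}.part*.gz")):
                  with gzip.open(name, "rb") as part:
                      dst.write(part.read())

    Each part can then be re-sent individually after a dropped connection instead of restarting the whole 20GB transfer; rsync --partial over the SSH access solves the same problem without manual splitting.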

    Read the article
