Search Results

Search found 49554 results on 1983 pages for 'database users'.


  • Is there really Object-relational impedance mismatch?

    - by user52763
    It is always stated that it is hard to store application objects in relational databases - the object-relational impedance mismatch - and that this is why document databases are better. However, is there really an impedance mismatch? An object has a key (albeit one that may be hidden away by the runtime as a pointer to memory), a set of values, and foreign keys to other objects. An object maps onto tables about as well as it maps onto a document; neither really fits. I can see a use for modelling the data into specific shapes for particular scenarios in the application - e.g. to speed up lookups and avoid joins - but wouldn't it be better to keep the data as normalized as possible at the core, and transform it as required?

    Read the article

  • Save BIG on Storage with Oracle Advanced Compression

    - by [email protected]
    Recently, we published a podcast revealing just how much Oracle benefits from its internal use of Oracle Database 11g and Advanced Compression. With hundreds of TB and millions of dollars saved, Oracle Advanced Compression is dramatically reducing storage costs and substantially improving efficiency across the company. Now, here's your chance: meet the experts, have your questions answered by them, and immediately start using your storage more efficiently. On April 14th, join me for a live Webcast with Oracle's Tim Shetler, Vice President of Product Management, and Bill Hodak, Principal Product Manager, to learn how Oracle Advanced Compression can reduce disk space requirements for all types of data, improve query and storage performance, and lower storage costs throughout the datacenter. Register here!

    Read the article

  • Creating symlink for Postgres

    - by Edwin
    As a developer, I often ssh right into my local database, just to test my application before pushing my code. However, I find that every time I want to access Postgres, I have to type in postgres@ubuntu:~$ /usr/local/pgsql/bin/psql test whereas on my work machine, all I have to do is type postgres@ubuntu:~$ psql --dbname=test --username=user I tried creating a symlink, which was successful, but whenever I try connecting to it through this shortcut, I get the following error message: psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"? How do I get this to work? In case it makes any difference, I'm using a self-compiled version of the 9.1.x series.

    Read the article

  • How can I get pictures from Shotwell on the notebook to the desktop?

    - by Meng Tian
    My father has a desktop and a notebook. The desktop is way more powerful and has RAID set up, so he uses it for editing and storing his pictures. But he spends several months a year away from home, so he has the notebook for that. He imports the pictures into Shotwell on the notebook and likes to organise them there already, by creating and naming the right events, deleting pictures and sometimes editing a bit. When he is back home, he wants to get his pictures onto the desktop without losing the names of the events in Shotwell. Sadly, one cannot merge two Shotwell databases, nor is it possible to import another database. Do you know any way I can manage this? Or did I just miss something?

    Read the article

  • Oracle Developer Day: The Oracle Database in Practice

    - by A&C Redaktion
    In the new year, Oracle Developer Days will once again take place in various cities! In this event, put together specifically by the Database colleagues, you will pick up plenty of practical tips and tricks and be brought up to date on the following topics: the differences between the editions and their secrets; the extensive base feature set available even without add-on options; performance and scalability in the individual editions; cost and resource savings made easy; security in the database; increasing availability with simple means; handling large data volumes; and cloud technologies in the Oracle Database. An outlook on the features of the new database version planned for 2013 rounds off the workshop. Dates, agenda, venues and registration can be found here. Sign up for the event today - attendance is free!

    Read the article

  • Developing a live video-streaming website

    - by cawecoy
    I'm a computer science student and know a little about some of the technologies I could use to start developing my website, like PHP, Ruby on Rails and Python, with MySQL or PostgreSQL for the database. I need to know which options are best (secure, stable, low-priced, etc.) to get started, based on my business information: My website will be a live video-streaming one, similar to livestream.com. We need to provide a secure service for our customers. They need to have a page to create and configure their own live streaming videos, get statistics, etc. We work with Wowza Media Server running on an Apache server. In addition, I would like to know some good practices for this kind of website development, as I am new to this. Thanks in advance!

    Read the article

  • Regarding VB.NET [closed]

    - by user68999
    Hi everyone. I am making a project for school management that keeps all the student records, using VB.NET 2008 and SQL Server 2008. I store each student's date of birth, entered in a text box with a date/time picker, in the database. In the search option I want to pick a date with the date/time picker and then see the students grouped by age in a DataGridView - for example, students who are 10 years old, 11 years old, and so on. How can I do this? Please help.
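
    The question is tagged VB.NET, but the idea is the same in either language: turn "age N" into a date-of-birth range and filter on that range with a parameterized query. A rough C# sketch, assuming a hypothetical Students table with a DateOfBirth column (adjust the names to the real schema):

      using System;
      using System.Data;
      using System.Data.SqlClient;

      class AgeSearch
      {
          // Returns the students who are exactly `age` years old on the date `asOf`
          // (the date picked in the date/time picker). Bind the result to the grid.
          public static DataTable FindStudentsByAge(string connectionString, int age, DateTime asOf)
          {
              // Someone is `age` years old on `asOf` if they were born in this window
              // (ignoring leap-day edge cases - this is only a sketch).
              DateTime latestDob = asOf.AddYears(-age);                     // youngest who already turned `age`
              DateTime earliestDob = asOf.AddYears(-(age + 1)).AddDays(1);  // oldest who has not yet turned `age + 1`

              var result = new DataTable();
              using (var conn = new SqlConnection(connectionString))
              using (var cmd = new SqlCommand(
                  "SELECT * FROM Students WHERE DateOfBirth BETWEEN @from AND @to", conn))
              {
                  cmd.Parameters.AddWithValue("@from", earliestDob);
                  cmd.Parameters.AddWithValue("@to", latestDob);
                  using (var adapter = new SqlDataAdapter(cmd))
                  {
                      adapter.Fill(result);   // e.g. dataGridView1.DataSource = result;
                  }
              }
              return result;
          }
      }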

    Read the article

  • MSSQL or MySQL: learning

    - by Yehuda
    I have been using MySQL for about 9 months now for websites, and I have become quite good at getting what I want out of the database. However, I am still missing most of the complicated parts. I have an excellent tutorial, but it is on SQL Server 2008. 1) Is it worth switching over to MSSQL (I understand the SQL dialect is different) so that I will learn all about SQL and databases in general? 2) Do most people use MySQL or MSSQL? 3) What is best practice? I am talking mainly about websites.

    Read the article

  • Learn a NoSQL or become a badass with traditional RDBMS - Where is/will the work be?

    - by beck
    I'm half way through my MSc and am thinking about my dissertation, which I get 3 months to work on full time. I'm very comfortable with the traditional relational database. The question is: should I work on a project where I get a good understanding of something like Cassandra, or should I really push my RDBMS knowledge to the limit? Getting great at something like MySQL is a solid, safe option, but will there really be much work for me with Cassandra in my tool belt? I would love to do either... Thanks for your opinions and advice.

    Read the article

  • Breaking up a large PHP object used to abstract the database. Best practices?

    - by John Kershaw
    Two years ago it was thought that a single object with functions such as $database->get_user_from_id($ID) would be a good idea. The functions return objects (not arrays), and the front-end code never worries about the database. This was great, until we started growing the database. There are now 30+ tables and around 150 functions in the database object. It's getting impractical and unmanageable, and I'm going to be breaking it up. What is a good solution to this problem? The project is large, so there's a limit to the extent I can change things. My current plan is to extend the current object for each table, then have the database object contain these. So, the above example would turn into (assuming "user" is a table) $database->user->get_user_from_id($ID). Instead of one large file, we would have a file for every table.
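
    A minimal sketch of the per-table split described above - hypothetical names, and shown in C# rather than PHP, since the shape of the refactoring is language-agnostic:

      using System.Data;

      public class User
      {
          public int Id;
          public string Name;
      }

      // Each table gets its own small repository; query logic for that table lives here
      // instead of in one 150-function class. Assumes the connection is already open.
      public class UserRepository
      {
          private readonly IDbConnection connection;
          public UserRepository(IDbConnection connection) { this.connection = connection; }

          public User GetUserFromId(int id)
          {
              using (IDbCommand cmd = connection.CreateCommand())
              {
                  cmd.CommandText = "SELECT id, name FROM user WHERE id = @id";
                  IDbDataParameter p = cmd.CreateParameter();
                  p.ParameterName = "@id";
                  p.Value = id;
                  cmd.Parameters.Add(p);
                  using (IDataReader reader = cmd.ExecuteReader())
                  {
                      return reader.Read()
                          ? new User { Id = reader.GetInt32(0), Name = reader.GetString(1) }
                          : null;
                  }
              }
          }
      }

      // The old facade survives, so callers change from database.GetUserFromId(id)
      // to database.User.GetUserFromId(id) and nothing else.
      public class Database
      {
          public UserRepository User { get; private set; }
          public Database(IDbConnection connection)
          {
              User = new UserRepository(connection);
          }
      }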

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I recently was tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm? The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design this like so:

        User table
            UserID (primary key, indexed, no dupes)
            FirstName
            LastName
            Department

        Request table
            RequestID (primary key, indexed, no dupes)
            <...> various data fields containing request details
            UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

        UsersTable
            UserID  FirstName  LastName
            234     John       Doe
            516     Jane       Doe
            123     Foo        Bar

        DepartmentsTable
            DepartmentID  Name
            1             Sales
            2             HR
            3             IT

        UserDepartmentTable
            UserDepartmentID  UserID  Department
            1                 234     2
            2                 516     2
            3                 123     1

        RequestTable
            RequestID  UserID  <...>
            1          516     blah
            2          516     blah
            3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table and numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small to mid-sized SQL DB? Thanks for comments/answers...

    Read the article

  • What's a good structure to save and retrieve locations of images?

    - by Goot
    I have a Java EE application where I collect information about movies. In my backend I provide data like the name, description, genre and a random UUID. I also have lots of related files, which are stored on a file server, including some screenshots, the DVD or Blu-ray cover, and video trailers. My current approach is: when saving the files to the file server, I retrieve the movie's random UUID (which is the primary key, btw.). I then rename the files screenshot_[UUID]_1, screenshot_[UUID]_2 ... etc. Now, there are lots of other ways to handle this, like saving all filenames in a database, or creating a directory structure on the file server for every UUID and, e.g., returning all images in the "[uuid]/screenshots" folder via REST. I expect about 30k requests a day, so the service has to be pretty performant. What's the best way to solve this?

    Read the article

  • Storing User-uploaded Images

    - by Nyxynyx
    What is the usual practice for handling user-uploaded photos and storing them on the database and server?

    For a user profile image:
        1. After receiving the image file from the user, rename the file to <image_id>_<username>
        2. Move the image to /images/userprofile
        3. Add the image filename to a users table containing their profile details like first_name, last_name, age, gender, birthday

    For an image belonging to a review written by a user:
        1. After receiving the image file from the user, rename the file to <image_id>_<review_id>
        2. Move the image to /images/reviews
        3. Add the image filename to a reviews table containing details like review_id, review_content, user_id, score

    Question 1: How should I go about storing the image filenames if the user can upload multiple photos for a particular review? Serialize?
    Question 2: Or have another table review_images with columns review_id, image_id, image_filename just for tracking images? Will doing a JOIN when retrieving the image_filename from this table slow down performance noticeably?
    Question 3: Should all the images be stored in a single folder? Will there be a problem when we have 100K photos in the same folder? Is there a more efficient way to go about doing this?

    Read the article

  • System that splits passwords across two servers

    - by Burning the Codeigniter
    I stumbled upon this news article on the BBC, "RSA splits passwords in two to foil hackers' attacks". tl;dr - a (randomized) password is split in half and stored across two separate servers, to foil hackers who gain access to either server in a security breach. Now the main question is, how would this kind of system be made? Code-wise - for PHP, which I commonly use for my web applications - the database password is normally stored in a configuration file, i.e. config.php, together with the username, so it is understandable that the passwords can be stolen if security is compromised. However, when splitting the password and sending the other half to the other server, how would the communication with that second server work (keeping PHP in mind), since the second server's credentials would also have to be stored in a configuration file, wouldn't they? The point of the security measure is to keep the second server's password away from the main one, so how exactly would the main server communicate without exposing any password other than the first server's? This certainly makes me think...
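
    For illustration only - this is not how RSA's product works internally, just the basic idea of a two-share split the article describes: a secret can be divided into two random-looking halves such that neither server alone learns anything, and only combining both shares recovers it. A minimal C# sketch of a simple XOR split:

      using System.Security.Cryptography;

      static class SecretSplit
      {
          // Split a secret into two shares: a random pad (stored on server A) and
          // secret XOR pad (stored on server B). Either share alone is just random
          // bytes; XOR-ing the two shares back together restores the secret.
          public static void Split(byte[] secret, out byte[] shareA, out byte[] shareB)
          {
              shareA = new byte[secret.Length];
              using (var rng = RandomNumberGenerator.Create())
                  rng.GetBytes(shareA);

              shareB = new byte[secret.Length];
              for (int i = 0; i < secret.Length; i++)
                  shareB[i] = (byte)(secret[i] ^ shareA[i]);
          }

          public static byte[] Combine(byte[] shareA, byte[] shareB)
          {
              var secret = new byte[shareA.Length];
              for (int i = 0; i < shareA.Length; i++)
                  secret[i] = (byte)(shareA[i] ^ shareB[i]);
              return secret;
          }
      }

    Note that this does not by itself answer how the two servers authenticate to each other, which is the part the question is really asking about.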

    Read the article

  • C#: display DB table structure

    - by user3529643
    I have a question. My code is the following:

      public partial class Form1 : Form
      {
          public OleDbConnection datCon;
          public string MyDataFile;
          public ArrayList tblArray;
          public ArrayList fldArray;

          public Form1()
          {
              InitializeComponent();
              lvData.Clear();
              lvData.View = View.Details;
              lvData.LabelEdit = false;
              lvData.FullRowSelect = true;
              lvData.GridLines = true;
          }

          private void DataConnection()
          {
              MyDataFile = Application.StartupPath + @"\studenti.mdb";
              string MyCon = @"provider=microsoft.jet.oledb.4.0;data source=" + MyDataFile;
              try
              {
                  datCon = new OleDbConnection(MyCon);
              }
              catch (Exception ex)
              {
                  MessageBox.Show(ex.Message);
              }
              FillTreeView();
          }

          private void GetTables(OleDbConnection cnn)
          {
              try
              {
                  cnn.Open();
                  DataTable schTable = cnn.GetOleDbSchemaTable(OleDbSchemaGuid.Tables,
                      new Object[] { null, null, null, "TABLE" });
                  tblArray = new ArrayList();
                  foreach (DataRow datrow in schTable.Rows)
                  {
                      tblArray.Add(datrow["TABLE_NAME"].ToString());
                  }
                  cnn.Close();
              }
              catch (Exception ex)
              {
                  MessageBox.Show(ex.Message);
              }
          }

          private void GetFields(OleDbConnection cnn, string tabNode)
          {
              string tabName;
              try
              {
                  tabName = tabNode;
                  cnn.Open();
                  DataTable schTable = cnn.GetOleDbSchemaTable(OleDbSchemaGuid.Columns,
                      new Object[] { null, null, tabName });
                  fldArray = new ArrayList();
                  foreach (DataRow datRow in schTable.Rows)
                  {
                      fldArray.Add(datRow["COLUMN_NAME"].ToString());
                  }
                  cnn.Close();
              }
              catch (Exception ex)
              {
                  MessageBox.Show(ex.Message);
              }
          }

          private void FillTreeView()
          {
              tvData.Nodes.Clear();
              tvData.Nodes.Add("Database");
              tvData.Nodes[0].Tag = "RootDB";
              GetTables(datCon);
              // add table nodes
              for (int i = 0; i < tblArray.Count; i++)
              {
                  tvData.Nodes[0].Nodes.Add(tblArray[i].ToString());
                  tvData.Nodes[0].Nodes[i].Tag = "Tables";
              }
              // add field nodes
              for (int i = 0; i < tblArray.Count; i++)
              {
                  GetFields(datCon, tblArray[i].ToString());
                  for (int j = 0; j < fldArray.Count; j++)
                  {
                      tvData.Nodes[0].Nodes[i].Nodes.Add(fldArray[j].ToString());
                      tvData.Nodes[0].Nodes[i].Nodes[j].Tag = "Fields";
                  }
              }
              this.tvData.ContextMenuStrip = contextMenuStrip1;
              contextMenuStrip1.ItemClicked += contextMenuStrip1_ItemClicked;
          }

          public void FillListView(OleDbConnection cnn, string tabName)
          {
              OleDbCommand cmdRead;
              OleDbDataReader datReader;
              string strField;
              lblTableName.Text = tabName;
              strField = "SELECT * FROM [" + tabName + "]";
              // initialise the cmdRead object
              cmdRead = new OleDbCommand(strField, cnn);
              cnn.Open();
              datReader = cmdRead.ExecuteReader();
              // fill the ListView
              while (datReader.Read())
              {
                  ListViewItem objListItem = new ListViewItem(datReader.GetValue(0).ToString());
                  for (int c = 1; c < datReader.FieldCount; c++)
                  {
                      objListItem.SubItems.Add(datReader.GetValue(c).ToString());
                  }
                  lvData.Items.Add(objListItem);
              }
              datReader.Close();
              cnn.Close();
          }

          private void ViewToolStripMenuItem_Click(object sender, EventArgs e)
          {
              DataConnection();
          }

          public void tvData_AfterExpand(object sender, System.Windows.Forms.TreeViewEventArgs e)
          {
              string tabName;
              int fldCount;
              if (e.Node.Tag.ToString() == "Tables")
              {
                  fldCount = e.Node.GetNodeCount(false);
                  // column headers
                  int n = lvData.Width;
                  double wid = n / fldCount;   // column width
                  for (int c = 0; c < fldCount; c++)
                  {
                      lvData.Columns.Add(e.Node.Nodes[c].Text, (int)wid, HorizontalAlignment.Left);
                  }
                  // get the table name
                  tabName = e.Node.Text;
                  FillListView(datCon, tabName);
              }
          }

          public void button1_Click(object sender, EventArgs e)
          {
              // TO DO??
          }
      }

    I have a treeview populated with the tables (nodes) from my database, and a listview which is populated with the data from a table when I click on that table. As you can see, I have a button1 on my form. When I click it, I want it to display the structure of the table I selected in my treeview (a treeview node) - not too many details, just the names of the columns in my table, the column types, and the primary keys. I've tried to follow many tutorials but I can't seem to manage it.
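
    A sketch of one way to fill in button1_Click, reusing the same GetOleDbSchemaTable calls the form already uses: the Columns schema rowset also carries a DATA_TYPE code (castable to OleDbType), and the Primary_Keys rowset lists the key columns. The variable names and the MessageBox output here are just for illustration; treat it as a starting point rather than a finished answer.

      // Assumes a table node is selected in tvData and reuses the existing datCon connection.
      public void button1_Click(object sender, EventArgs e)
      {
          if (tvData.SelectedNode == null || tvData.SelectedNode.Tag.ToString() != "Tables")
              return;

          string tabName = tvData.SelectedNode.Text;
          System.Text.StringBuilder sb = new System.Text.StringBuilder();

          datCon.Open();

          // Primary key columns of the selected table
          DataTable pkTable = datCon.GetOleDbSchemaTable(OleDbSchemaGuid.Primary_Keys,
              new Object[] { null, null, tabName });
          ArrayList pkColumns = new ArrayList();
          foreach (DataRow pkRow in pkTable.Rows)
          {
              pkColumns.Add(pkRow["COLUMN_NAME"].ToString());
          }

          // Column names and data types
          DataTable colTable = datCon.GetOleDbSchemaTable(OleDbSchemaGuid.Columns,
              new Object[] { null, null, tabName });
          foreach (DataRow colRow in colTable.Rows)
          {
              string colName = colRow["COLUMN_NAME"].ToString();
              OleDbType colType = (OleDbType)Convert.ToInt32(colRow["DATA_TYPE"]);
              string pkMark = pkColumns.Contains(colName) ? "  [primary key]" : "";
              sb.AppendLine(colName + " : " + colType + pkMark);
          }

          datCon.Close();
          MessageBox.Show(sb.ToString(), "Structure of " + tabName);
      }

    The columns come back in no guaranteed order; sort on the rowset's ORDINAL_POSITION column if the display order matters.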

    Read the article

  • MySQL vs. SQL Server GoDaddy, What is the difference between hosted DB and App_Data Db

    - by Nate Gates
    I'm using GoDaddy for site hosting, and I'm currently using MySQL, because there are fewer limits on size, etc. My question is: what is the difference between using a hosted GoDaddy DB such as MySQL vs. creating a SQL Server database in the App_Data folder? My guess is security? Would it be a bad idea to use a SQL Server DB that's located in the App_Data folder? Additionally, I am able to create a .mdf (SQL Server DB file) in the App_Data folder, but I'm really unsure if I should use that or not. If I did use it, it would simplify using some of the Microsoft tools. Like I said, my guess is that it would be less secure, but I don't really know. I do know I have a 10GB file system limit, so I'm assuming my DB would have to share that space.

    Read the article

  • Immutable design with an ORM: How are sessions managed?

    - by Programmin Tool
    If I were to make a site with a mutable language like C# and use NHibernate, I would normally approach sessions with the idea of creating them only when needed and disposing of them at the end of the request. This has helped with keeping a session open for multiple transactions by a user while preventing it from staying open so long that its state might be corrupted. In an immutable system, like F#, I would think I shouldn't do this, because it supposes that a single session could be updated constantly by any number of inserts/updates/deletes/etc... I'm not against the "using" solution, since I would think that connection pooling will help cut down on the cost of connecting every time, but I don't know if all database systems do connection pooling. It just seems like there should be a better way that doesn't compromise the immutability goal. Should I just do a simple "using" block per transaction, or is there a better pattern for this?
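
    For reference, the plain "using" block per transaction mentioned above looks roughly like this in C# with NHibernate (the Order entity and the factory wiring are hypothetical):

      using NHibernate;

      public class Order
      {
          public virtual int Id { get; set; }
      }

      public class OrderWriter
      {
          private readonly ISessionFactory sessionFactory;
          public OrderWriter(ISessionFactory sessionFactory) { this.sessionFactory = sessionFactory; }

          public void SaveOrder(Order order)
          {
              // One short-lived session and transaction per unit of work; both are
              // disposed at the end of the block, returning the connection to the pool.
              using (ISession session = sessionFactory.OpenSession())
              using (ITransaction tx = session.BeginTransaction())
              {
                  session.Save(order);
                  tx.Commit();
              }
          }
      }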

    Read the article

  • What architecture and setup do you use to create your application with a database? [closed]

    - by FullmetalBoy
    I have a home-made project combining an application and a database. Today I'm using an N-tier architecture. In total I have 4 projects in my solution in VS2010. The fourth project is a transaction layer that connects the other 3 projects (one each for presentation, business logic and the database layer) in order to retrieve data from the database all the way up to presenting it in the presentation layer. The transaction layer contains Entity Framework 4 plus customized classes that carry the data to the presentation layer. I always use LINQ to retrieve data in the database layer. From my experience, every time I use LINQ together with Entity Framework it takes a lot of time to retrieve data, because (I believe) my Entity Framework model has to reload every time I want to retrieve anything from the database. My question for you guys is: when you create an application connecting to a database, what architecture do you use? How do you retrieve the data in your application - Entity Framework, etc.?
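
    One small EF4-era trick that can take some of the sting out of repeated LINQ-to-Entities lookups is a compiled query, which caches the LINQ-to-SQL translation instead of redoing it on every call. A sketch, where the ShopEntities context and Product entity are hypothetical stand-ins for the real model:

      using System;
      using System.Data.Objects;   // EF4: CompiledQuery lives here
      using System.Linq;

      public static class ProductQueries
      {
          // Compiled once and reused; only the query translation is saved -
          // the database is still queried on each call.
          public static readonly Func<ShopEntities, int, IQueryable<Product>> ByCategory =
              CompiledQuery.Compile((ShopEntities ctx, int categoryId) =>
                  ctx.Products.Where(p => p.CategoryId == categoryId));
      }

      // Usage, with a short-lived context per request:
      // using (var ctx = new ShopEntities())
      // {
      //     var items = ProductQueries.ByCategory(ctx, 42).ToList();
      // }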

    Read the article

  • Implementing a dynamic query handler on historical data

    - by user2390183
    EDIT: Refined question to focus on the core issue.

    Context: I have historical data about property (house) sales, collected from various sources into a centralized/cloud data source (assume the collection of this information is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source.

    Example queries:
        Simple: for a given XYZ postcode, what is the average price for a 3-bedroom house?
        Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from average values of historic data filtered by various characteristics of the house: postcode, number of bedrooms, total area, and deeper attributes like building type, year built, features)?

    In addition to the average price, the application should support other figures (maximum or minimum price, etc.) and trends (graphs) of a selected property attribute over a period of time. Hence, the queries should not force the search to go through a primary key or a few fixed fields. In other words, queries can be things like: What is the change in 3-bedroom house prices (irrespective of location) over the last 30 days? What kind of properties can we get for X price (irrespective of location or house type)?

    The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interfaces, data warehousing, or something else) this problem (dynamic queries on historic data) belongs to, so that I can explore it further.

    My findings so far (I could be wrong on the following, so please correct me if you think so):
        I briefly read about BI/data analytics - I think it is a heavyweight solution for my problem and has scalability issues.
        DB design - as I understand it, an RDBMS works well if you know the data model at design time. I expect the attributes of a property, or of other entities (users) I am going to bring in, to evolve quickly, so maintenance would be an issue. And as multiple users will be executing queries at the same time, performance could become a bottleneck.
        Other options like graph DBs (http://www.tinkerpop.com/) seem a bit complex (they are good, but using such general-purpose tools for my problem feels like solving it in assembly language).
        Big-data-related solutions are aimed at analysing data from multiple unrelated domains.

    So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of the back-end for a property listing or similar portal.)

    Read the article

  • Point me to info about constructing filters (of lists)

    - by jah
    I would like some pointers to information which would help me understand how to go about providing the ability to filter a list of entities by their attributes, as well as by attributes of related entities. As an example, imagine a web app which provides order management of some kind. Orders and related entities are stored in a relational database, and the app has an interface which lists the orders. The problem is: how does one allow the list to be filtered by, for example:
        order number (an attribute)
        line item name (an attribute of an n-n related entity)
        some text in an administrative note related to the order (text found in an attribute of a 1-1 related entity)
    I'm trying to discover whether there is something like a standard, efficient way to construct the queries and the filtering form, or some possible strategies, or any theory on the topic, or some example code. My google-fu fails me.
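
    There is no single standard, but a common pattern is to join in the related tables and append a predicate (always through a bind parameter) only for the filters the user actually filled in. A rough C# sketch, with made-up table and column names:

      using System.Data.SqlClient;
      using System.Text;

      public static class OrderFilter
      {
          public static SqlCommand Build(string orderNumber, string lineItemName, string noteText)
          {
              var sql = new StringBuilder(
                  "SELECT DISTINCT o.* FROM Orders o " +
                  "LEFT JOIN OrderLineItems li ON li.OrderId = o.Id " +
                  "LEFT JOIN OrderNotes n ON n.OrderId = o.Id " +
                  "WHERE 1 = 1");
              var cmd = new SqlCommand();

              // Each filter is optional: only supplied values add a predicate and a parameter.
              if (!string.IsNullOrEmpty(orderNumber))
              {
                  sql.Append(" AND o.OrderNumber = @orderNumber");
                  cmd.Parameters.AddWithValue("@orderNumber", orderNumber);
              }
              if (!string.IsNullOrEmpty(lineItemName))
              {
                  sql.Append(" AND li.Name LIKE @lineItemName");
                  cmd.Parameters.AddWithValue("@lineItemName", "%" + lineItemName + "%");
              }
              if (!string.IsNullOrEmpty(noteText))
              {
                  sql.Append(" AND n.Body LIKE @noteText");
                  cmd.Parameters.AddWithValue("@noteText", "%" + noteText + "%");
              }

              cmd.CommandText = sql.ToString();
              return cmd;   // assign a connection and execute elsewhere
          }
      }

    The filtering form mirrors the same shape: one optional input per filterable attribute, with the handler passing along only the ones that were filled in.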

    Read the article

  • How Data Transfers differ on Smart Phones: iPhone vs. Android vs. Windows Phone

    - by MCH
    I am interested in how each individual smart phone platform allows data transfers within a third-party app. I am interested in designing apps that allow customers to update, transfer, download, etc. data from their smart phone to their personal computer and vice versa (ranging from plain text, to XML, to a relational database). I only have experience with the iPod Touch, and one particular app that kept all the data on an online server, so to update the data on your PC or iPhone you had to go online - are there other ways to do it, like Bluetooth, wireless LAN, USB, etc.? I believe Apple has certain policies on this in order to control the App Store and individual iPhones. I suppose each company has a particular policy on how an app is allowed to transfer data to another system - does anyone have a good understanding of this? Thank you.

    Read the article

  • How Would You Design This Table?

    - by sooprise
    I have to create a table where each row needs to store 50 number values. Each row will always need to store 50 number values. If this was a smaller number of values, I would just make fields for each of the values, but because there are 50, this approach seems a bit cumbersome (but since it will always be 50 values, maybe this is the correct approach?). Is there a way to store an array of values in a field? This seems like a nice solution, but the concept is almost identical to creating a relational database.

    Read the article

  • Alternatives to SQL-like databases

    - by user613326
    Well, I was wondering: these days computers usually have 2GB or 4GB of memory. I would like to use a secure client-server model, and an SQL database is the likely candidate. On the other hand, I only have about 8000 records, which will not be read or written frequently; in total they would consume less than 16 megabytes. That made me wonder what good, secure options there are in a Windows environment to store the data and work with it in a multi-client, single-server model, without using SQL or MySQL. For such a small amount of data, might other ideas be better? I'd like to keep maintenance as simple as possible (no administrator should need to know SQL maintenance, as they don't know databases in my target environment). Maybe storing in XML files, or something else. I just wonder how others would go about it if ease of administration is the main goal. Oh, and it should be a bit secure too; the client-server data must be somewhat protected (maybe NTLM, file shares, HTTPS or... etc.).
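
    Since XML files are already on the list of candidates, a minimal sketch of that route (the Record fields are made up for illustration): with only ~8000 small records, the whole set can be loaded into memory, worked on, and written back in one go.

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Xml.Serialization;

      public class Record
      {
          public int Id;
          public string Name;
          public DateTime LastUpdated;
      }

      public static class RecordStore
      {
          static readonly XmlSerializer serializer = new XmlSerializer(typeof(List<Record>));

          // Load the whole file into memory; an empty list if it does not exist yet.
          public static List<Record> Load(string path)
          {
              if (!File.Exists(path))
                  return new List<Record>();
              using (FileStream fs = File.OpenRead(path))
                  return (List<Record>)serializer.Deserialize(fs);
          }

          // Write the whole list back in one pass.
          public static void Save(string path, List<Record> records)
          {
              using (FileStream fs = File.Create(path))
                  serializer.Serialize(fs, records);
          }
      }

    Kept on an NTFS share, access control then comes from what Windows already provides (share and file permissions, or HTTPS if a small service sits in front of the file) rather than from database accounts.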

    Read the article

  • How to get experience in large scale databases?

    - by Justin
    I have written applications that are very small scale, and the code I write works fine for them. But I have often wondered how the server-side code I write would scale up from hundreds of queries per day to millions. Also, when looking at possible jobs/projects, people are often looking for developers with experience in this sort of high-traffic database design, so I would at least like to be able to say: I haven't gotten to work on a project that was this popular, but I have at least tried to simulate it. Are there tools or frameworks that can generate a lot of traffic, or at least simulate what would happen with traffic at different orders of magnitude, so I could get some practice writing optimized code for higher-traffic applications?
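
    Even without a dedicated tool, a crude load generator is only a few lines: fire a representative query from many worker threads and watch throughput and how the database behaves. A hedged C# sketch (connection string, table and column names are placeholders):

      using System;
      using System.Data.SqlClient;
      using System.Diagnostics;
      using System.Threading.Tasks;

      class LoadTest
      {
          static void Main()
          {
              string connStr = "...";          // placeholder: your test database
              const int totalQueries = 100000;
              var options = new ParallelOptions { MaxDegreeOfParallelism = 32 };
              var sw = Stopwatch.StartNew();

              Parallel.For(0, totalQueries, options, i =>
              {
                  using (var conn = new SqlConnection(connStr))
                  using (var cmd = new SqlCommand(
                      "SELECT COUNT(*) FROM Orders WHERE CustomerId = @id", conn))
                  {
                      cmd.Parameters.AddWithValue("@id", i % 1000);   // spread load across rows
                      conn.Open();
                      cmd.ExecuteScalar();
                  }
              });

              sw.Stop();
              Console.WriteLine("{0:N0} queries in {1:N1}s ({2:N0}/s)",
                  totalQueries, sw.Elapsed.TotalSeconds, totalQueries / sw.Elapsed.TotalSeconds);
          }
      }

    It only simulates query volume, not realistic data distributions or think time, but it is enough to start seeing where indexes, connection pooling and query plans begin to matter.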

    Read the article

  • What are the benefits of NoSQL?

    - by geekbrit
    I'm struggling to see how NoSQL brings any advantages to a system, so I'm interested in hearing from people who have chosen to use it, both the reasons they chose NoSQL, and positive and negative experiences in implementation and use. My first impressions are that NoSQL is a product of the availability of very large, very cheap storage; it seems that a million record database could easily have a 100MByte overhead in field labels embedded in the records. This goes against one of my software design instincts - remove redundancy in code and data whenever practical. However, NoSQL is being used with success in large high-traffic systems, so I must be missing something, looking forward to your responses.

    Read the article
