Search Results

Search found 51747 results on 2070 pages for 'oracle database in memory'.


  • Windows 2008 R2 IIS worker process memory usage increase

    - by nLL
    I have a web site written in C# with around 400-500 users online at any time. It ran on a 32-bit Windows 2008 machine and never locked up or slowed down due to increased memory consumption until I upgraded its server to Windows 2008 R2 64-bit. The old server had only 4 GB of RAM and a quad-core CPU at 2 GHz, and the site worked just fine. Since I upgraded the server I have noticed (twice within 10 days) that it starts to eat RAM; last night it went up to 4 GB. As RAM usage increases, response slows down quite a lot. Recycling the app pool doesn't help; I have to restart its worker process to recover. I've noticed this usually happens if there are continuous errors. As I didn't change anything in the code, am I safe to assume it is not related to a memory leak in the code? Has anyone come across something like this? The same thing happens if I create continuous errors with classic ASP. Thanks

    Read the article

  • 4GB Memory Upgrade for Acer Aspire 5102WLMi

    - by Richard Slater
    I have bought a 4GB memory upgrade (2x 2GB PC2-5300 SODIMM) for my Acer Aspire 5102WLMi (Aspire 5100 Series) laptop. I installed the two memory modules correctly, however with 4GB installed the laptop refuses to POST. I have tried the following: tried each 2GB SODIMM without the other (worked fine); tried the original 512MB SODIMMs (worked fine); tried one original 512MB SODIMM with one new 2GB SODIMM (worked fine); tried swapping over the 2GB SODIMMs (didn't boot); left the computer for 10 minutes with both 2GB SODIMMs installed (didn't boot); checked the latest BIOS is installed (no change). The Crucial website says that the laptop supports 4GB of RAM, as do several other sites found through Google, so up until now I was fairly confident this would work. A couple of questions that would be good to have answered: Question: Has anyone got an Acer Aspire 5100 Series running with 4GB RAM? Answer: Yes, I now have one working with 3.75GB usable; the rest is used by the graphics card. Question: Any tips on getting this to work; is there a CMOS reset switch? Answer: Yes there is. If both SODIMMs are removed, two very small interlocking PCB tracks are revealed; if these are shorted together with a screwdriver, the BIOS will be reset. Thanks.

    Read the article

  • Allocating More Than 4 GB Of Memory

    - by TPatti
    I am facing an issue with memory allocation. I have: Host OS: Microsoft Windows XP Professional x64 Edition, Version 2003, Service Pack 2. Host physical memory: 8 GB. Guest OS: Red Hat Enterprise Linux WS release 4 (Nahant Update 5). I am not sure if it is 32- or 64-bit; the lsb_release -a command reports LSB Version: core-3.0-ia32, so I guess it is 32-bit. VMware Player version: 2.5.2 build-156735. I would like VMware Player to be able to allocate more than 4 GB, but when I go to the settings it only lists 4 GB. If I choose the "About" option, it actually says that I have 8 GB installed in the host machine. This VMware image was created by someone else and provided to me, apparently with VMware Workstation 5. Why can't I allocate 8 GB? Where is the problem: in the VMware Player version, the guest OS, or the host OS? How can I solve this? I understand that for this version of Player there isn't a separate version for 32- and 64-bit.

    Read the article

  • Using a "white list" for extracting terms for Text Mining, Part 2

    - by [email protected]
    In my last post, we set the groundwork for extracting specific tokens from a white list using a CTXRULE index. In this post, we will populate a table with the extracted tokens and produce a case table suitable for clustering with Oracle Data Mining. Our corpus of documents will be stored in a database table that is defined as

    create table documents(id NUMBER, text VARCHAR2(4000));

    However, any suitable Oracle Text-accepted data type can be used for the text. We then create a table to contain the extracted tokens. The id column contains the unique identifier (or case id) of the document. The token column contains the extracted token. Note that a given document may have many tokens, so there will be one row per token for a given document.

    create table extracted_tokens (id NUMBER, token VARCHAR2(4000));

    The next step is to iterate over the documents, extract the matching tokens using the index, and insert them into our token table. We use the MATCHES function for matching the query_string from my_thesaurus_rules with the text.

    DECLARE
        cursor c2 is
          select id, text
          from documents;
    BEGIN
        for r_c2 in c2 loop
           insert into extracted_tokens
             select r_c2.id id, main_term token
             from my_thesaurus_rules
             where matches(query_string, r_c2.text) > 0;
        end loop;
    END;

    Now that we have the tokens, we can compute the term frequency - inverse document frequency (TF-IDF) for each token of each document.

    create table extracted_tokens_tfidf as
      with num_docs as (select count(distinct id) doc_cnt
                        from extracted_tokens),
           tf       as (select a.id, a.token,
                               a.token_cnt/b.num_tokens token_freq
                        from
                          (select id, token, count(*) token_cnt
                           from extracted_tokens
                           group by id, token) a,
                          (select id, count(*) num_tokens
                           from extracted_tokens
                           group by id) b
                        where a.id=b.id),
           doc_freq as (select token, count(*) overall_token_cnt
                        from extracted_tokens
                        group by token)
      select tf.id, tf.token,
             token_freq * ln(doc_cnt/df.overall_token_cnt) tf_idf
      from num_docs,
           tf,
           doc_freq df
      where df.token=tf.token;

    From the WITH clause, the num_docs query simply counts the number of documents in the corpus. The tf query computes the term (token) frequency by counting the number of times each token appears in a document and dividing that by the number of tokens found in the document. The doc_freq query counts the number of documents in which each token appears across the corpus. In the SELECT clause, we compute the tf_idf.

    Next, we create the nested table required to produce one record per case, where a case corresponds to an individual document. Here, we COLLECT all the tokens for a given document into the nested column extracted_tokens_tfidf_1.

    CREATE TABLE extracted_tokens_tfidf_nt
                 NESTED TABLE extracted_tokens_tfidf_1
                     STORE AS extracted_tokens_tfidf_tab AS
                 select id,
                        cast(collect(DM_NESTED_NUMERICAL(token, tf_idf)) as DM_NESTED_NUMERICALS) extracted_tokens_tfidf_1
                 from extracted_tokens_tfidf
                 group by id;

    To build the clustering model, we create a settings table and then insert the various settings. Most notable are the number of clusters (20), the use of cosine distance, which is better for text, turning off automatic data preparation since the values are ready for mining, the number of iterations (20) to get a better model, and the split criterion of size for clusters that are roughly balanced in the number of cases assigned.

    CREATE TABLE km_settings (setting_name VARCHAR2(30), setting_value VARCHAR2(30));

    BEGIN
      INSERT INTO km_settings (setting_name, setting_value)
        VALUES (dbms_data_mining.clus_num_clusters, 20);
      INSERT INTO km_settings (setting_name, setting_value)
        VALUES (dbms_data_mining.kmns_distance, dbms_data_mining.kmns_cosine);
      INSERT INTO km_settings (setting_name, setting_value)
        VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_off);
      INSERT INTO km_settings (setting_name, setting_value)
        VALUES (dbms_data_mining.kmns_iterations, 20);
      INSERT INTO km_settings (setting_name, setting_value)
        VALUES (dbms_data_mining.kmns_split_criterion, dbms_data_mining.kmns_size);
      COMMIT;
    END;

    With this in place, we can now build the clustering model.

    BEGIN
        DBMS_DATA_MINING.CREATE_MODEL(
          model_name          => 'TEXT_CLUSTERING_MODEL',
          mining_function     => dbms_data_mining.clustering,
          data_table_name     => 'extracted_tokens_tfidf_nt',
          case_id_column_name => 'id',
          settings_table_name => 'km_settings');
    END;

    To generate cluster names from this model, check out my earlier post on that topic.
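
    As a quick check of the result, the SQL scoring functions can assign each document to its nearest cluster. This is a sketch of my own rather than part of the original post; it assumes the model and case table names used above, and exact behavior may vary by Oracle release.

    -- score the same nested case table with the model built above
    select id,
           cluster_id(TEXT_CLUSTERING_MODEL using *)          as assigned_cluster,
           cluster_probability(TEXT_CLUSTERING_MODEL using *) as probability
    from extracted_tokens_tfidf_nt
    order by id;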

    Read the article

  • The Internet of Things & Commerce: Part 3 -- Interview with Kristen J. Flanagan, Commerce Product Management

    - by Katrina Gosek, Director | Commerce Product Strategy-Oracle
    Internet of Things & Commerce Series: Part 3 (of 3)

    And now for the final installment of my three-part series on the Internet of Things & Commerce. Post one, “The Next 7,000 Days”, introduced the idea of the Internet of Things, followed by a second post interviewing one of our chief commerce innovation strategists, Brian Celenza. This final post in the series is an interview with Kristen J. Flanagan, lead product manager for Oracle Commerce omnichannel strategy. She takes us through the past, present, and future of how our Commerce solution is re-imagining the way physical and digital shopping come together.

    -------

    QUESTION: It’s your job to stay on top of what our customers need to not only run their online businesses effectively, but also to make sure they have product capabilities they can innovate and grow on. What key trend has been top-of-mind for you and our customers around this collision of physical and digital shopping?

    Kristen: I’ll agree with Brian Celenza that, hands down, mobile has forced a major disruption in shopping and selling behavior. A few years ago, mobile exploded at a pace I don't think anyone was expecting. Early on, we saw our customers scrambling to establish a mobile presence, mostly through "screen scraping" technologies. As smartphones continued to advance (at lightning speed!), our customers started to investigate ways to truly tap into their eCommerce capabilities to deliver the mobile experience. They started looking to us for a means of using the eCommerce services and capabilities to deliver a mobile experience that is tailored for mobile rather than the desktop experience on a smaller screen. In the future, I think we'll see customers starting to really understand what their shoppers need and expect from a mobile offering and how they can adapt their content and delivery of that content to meet those needs. And mobile shopping doesn’t stop at the consumer/buyer. Because the in-store experience is compelling and has advantages that digital just can't offer, we're also starting to see the eCommerce services being leveraged for mobile for in-store sales associates. Brick-and-mortar retailers are interested in putting the omnichannel product catalog, promotions, and cart into the hands of knowledgeable associates. Retailers are now looking to connect and harness the eCommerce data in-store so that shoppers have a reason to walk in. I think we'll be seeing a lot more customers thinking about melding the in-store and digital experiences to present a richer offering for shoppers.

    QUESTION: What are some examples of what our customers are doing currently to bring these concepts to reality?

    Kristen: Well, without question, connecting the digital and brick-and-mortar worlds is becoming table stakes for selling experiences. If a brand has a foot in both worlds (i.e., isn’t a pure-play online retailer), they have to connect the dots, because shoppers – whether consumers or B2B buyers – don't think in clearly defined channels anymore. The expectation is connectedness – for on- and offline experiences, promotions, products, and customer data. What does this mean practically for businesses selling goods on- and offline? It touches a lot of systems: inventory info on the eCommerce site, fulfillment options across channels (buy online/pick up in store), order information (representing various channels for a cohesive view of shopper order history), promotions across digital and store, etc. A few years ago, the main link between store and digital was the smartphone.
    We all remember when “apps” became a thing and many of our customers were scrambling to get a native app out there. Now we're seeing more strategic thinking around the benefits of mobile web vs. native and how that ties into the purpose and role of mobile within the digital channel. Put more broadly, how these pieces fit together in the overall brand puzzle. The same could be said for “showrooming.” Where it was a major concern (i.e., shoppers using stores to look at merchandise and then ordering online from Amazon), in recent months it has emerged that the inverse is now becoming a reality as well. "Webrooming" (using digital sites to do research before making a purchase in the store) is a new behavior pure-play retailers are challenged with. There are many technologies, behaviors, and pieces of information that need to tie together to offer a holistic omnichannel shopping experience. As a result, brands are looking for ways to connect the digital and in-store experiences to bridge the gaps: shared assortments across channels, assisted selling apps that arm associates with information about shoppers, shared promotions, inventory, etc.

    QUESTION: How has Oracle Commerce been built to help brands make the link between in-store and digital over the last few years?

    Kristen: Over the last seven years, the product has been in step with the changes in industry needs. Here is a brief history of the evolution. Prior to Oracle’s acquisition of ATG and Endeca, key investments were made in cross-channel functionality that we are still building on today.

    Commerce Service Center (v2007.1): ATG introduced the Commerce Service Center in 2007.1, which marked the first entry into what was then called “cross-channel.” The Commerce Service Center is a call-center-agent-facing application that enables agents to see shopper orders, the online catalog, promotions, and pricing. It is tightly integrated with the eCommerce capabilities of the platform and commerce engine, and it provided a means of connecting data from the call center and online channels.

    REST services framework (v9.1): In v9.1 we introduced the REST services framework and interface in the Platform that enabled customers to use ATG web services in other applications. This framework has become the basis for our subsequent omnichannel features and functionality.

    Multisite Architecture (v10): With the v10 release, we introduced the Multisite Architecture, which enabled customers to manage multiple sites (and channels) within a single instance of the BCC. Customers could create site- and channel-specific catalogs, promotions, targeters, and scenarios.

    Endeca Page Builder (2.x) / Experience Manager (3.x): With the introduction of Endeca for Mobile (now part of the core platform, available through the reference store; see below) on top of Page Builder (and then eventually Experience Manager), Endeca gave business users the tools to create and manage native and mobile web applications. And since the acquisition of both ATG (2011) and Endeca (2012), Oracle Commerce has leveraged the best of each leading technology’s capabilities for omnichannel commerce to continue to drive innovation for our customers.

    Service enablement of core Oracle Commerce capabilities (v10.1.1, 10.2, & 11): After the establishment of the REST services framework and interface, we followed up in subsequent releases with service enablement of core Oracle Commerce capabilities throughout the iOS native app and the enablement of the core Commerce Service Center features.
    The result is that customers can leverage these services for their integrations with other systems, as well as for their omnichannel initiatives.

    Mobile web reference application (v10.1): In 10.1 we introduced the shopper-facing mobile reference application that showed how to use Oracle Commerce to deliver a mobile web experience for shoppers. This included the use of Experience Manager and cartridges to drive those experiences on select pages.

    Native (iOS) reference application (v10.1.1): We came out with the 10.1.1 shopper-facing native iOS reference app that illustrated how to use the Commerce REST services to deliver an iOS app. It also included Experience Manager-driven pages.

    Assisted Selling reference application (v10.2.1): The Assisted Selling reference application is our first reference application designed for the in-store associate. This iOS app shows customers how they can use Oracle Commerce data and information to provide a high-touch, consultative sales environment, as well as to put the endless aisle into the hands of their associates. Shoppers can start a cart online, and in-store associates can access that cart via the application to provide more information or add products, and then transact using the ATG engine.

    Support for Retail promotions (v11): As part of the v11 release, we worked with teams in the Oracle Retail Global Business Unit (RGBU) to assess which promotion types and capabilities are supported across our products. Those products included Oracle Commerce, Oracle Point of Service (ORPOS), and Oracle Retail Price Management (RPM). The result is that customers can now more easily support omnichannel use cases between the store and digital.

    Making sure Oracle Commerce can help support the omnichannel needs of our customers is core to our product strategy. With 89% of consumers now using two or more channels to make a single purchase, ensuring that cross-channel interactions are linked is critical to a great customer experience – and to sales. As Oracle Commerce evolves, we want to make it simple for organizations to create, deliver, and scale experiences across touchpoints with our create once, deploy commerce anywhere framework. We have a flexible, services-oriented architecture that allows data, content, catalogs, cart, experiences, personalization, and merchandising to be shared across touchpoints and easily extended into new environments like mobile, social, in-store, call center, and new websites. [For the latest downloads and Oracle Commerce documentation, please visit the Oracle Technology Network.]

    ------

    Thank you to both Brian and Kristen for their contributions to this blog series and their continued thought leadership for Oracle Commerce. We are all looking forward to the coming years and months of new shopping behaviors and opportunities to innovate. Because, if the digital fabric of our everyday lives continues to change at the same pace, the next five years (that's just under 2,000 days) will be dramatic.

    ----------

    THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT

    Read the article

  • How to solve QPixmap::fromImage memory leak?

    - by dodoent
    Hello everyone! I have a problem with Qt. Here is the part of the code that troubles me:

    void FullScreenImage::QImageIplImageCvt(IplImage *input)
    {
        help = cvCreateImage(cvGetSize(input), input->depth, input->nChannels);
        cvCvtColor(input, help, CV_BGR2RGB);
        QImage tmp((uchar *)help->imageData, help->width, help->height, help->widthStep, QImage::Format_RGB888);
        this->setPixmap(QPixmap::fromImage(tmp).scaled(this->size(), Qt::IgnoreAspectRatio, Qt::SmoothTransformation));
        cvReleaseImage(&help);
    }

    void FullScreenImage::hideOnScreen()
    {
        this->hide();
        this->clear();
    }

    void FullScreenImage::showOnScreen(IplImage *slika, int delay)
    {
        QImageIplImageCvt(slika);
        this->showFullScreen();
        if (delay > 0)
            QTimer::singleShot(delay * 1000, this, SLOT(hideOnScreen()));
    }

    So, the method showOnScreen uses the private method QImageIplImageCvt to create a QImage from an IplImage (which is used by OpenCV), which is then used to create a QPixmap in order to show the image in full screen. The FullScreenImage class inherits QLabel. After some delay, the fullscreen picture should be hidden, so I use a QTimer to trigger an event after the delay. The event handler is the hideOnScreen method, which hides the label and should clear the memory. The problem is the following: whenever I call QPixmap::fromImage, it allocates the memory for the pixmap data and copies the data from the QImage memory buffer to the QPixmap memory buffer. After the label is hidden, the QPixmap data still remains allocated, and even worse, after the next QPixmap::fromImage call a new chunk of memory is allocated for the new picture, and the old data is not freed from memory. This causes a memory leak (cca 10 MB per method call with my testing pictures). How can I solve that leak? I've even tried to create a private QPixmap variable, store the pixmap created by QPixmap::fromImage in it, and then call its destructor in the hideOnScreen method, but it didn't help. Is there a non-static way to create a QPixmap from a QImage? Or even better, is there a way to create a QPixmap directly from an IplImage*? Thank you in advance for your answers.

    Read the article

  • Comparing two Oracle schema, other users

    - by Igman
    Hello! I have been tasked with comparing two Oracle schemas with a large number of tables to find the structural differences between them. Up until now I have used the DB Diff tool in Oracle SQL Developer, and it has worked very well. The issue is that now I need to compare tables in a user that I cannot log into, but which I can see through the Other Users section in SQL Developer. Whenever I try to use the diff tool to compare those objects to the other schema, it does not work. Does anyone have any idea how to do this? It would save me a very large amount of work. I have some basic SQL knowledge if that is what's needed. Thanks.

    Read the article

  • Good embedded database solution (like SQLite) for .Net

    - by vfilby
    I am looking for file-based storage solutions that I can use with a .NET project. They need to have a SQL-like interface for storing and retrieving data, have relatively little overhead, and must not require any additional components installed by the end user. I am hoping for a .dll that I can reference and use. Cool points awarded if it is closely tied to an ORM. My current favourite is SQLite; are there any better ones out there that I should know about? I have a (healthy?) bias against Access because I feel it is overcomplicated for what I need, but I am open to being convinced otherwise. PS: "No, there is nothing better than SQLite" is a perfectly good answer.

    Read the article

  • How to generate a dbml file from a Sybase database?

    - by 5YrsLaterDBA
    I think we may have trouble with our existing project. For various reasons we have to switch from SQL Server to Sybase SQL Anywhere 11, and now we are trying to find a way to continue using our existing LINQ code. We hope we can still use LINQ to SQL; if not, we hope we can use LINQ to Entities, otherwise we will have to change to ADO. How can we generate a dbml file from Sybase SQL Anywhere 11? After that, can we use SqlMetal to generate the .cs files?

    Read the article

  • Non-relational database modeling tool?

    - by Angel Escobedo
    Hey guys, please recommend some tools you have used successfully for DW, data mart, BI and non-relational modeling, for example for automatic creation of snowflake schemas, dimensions and fact tables. Which tools give you a familiar feel for the diagrams and surrogate keys, and have the option to export to or connect to SQL Server 2008? Thanks

    Read the article

  • Is it easy to switch from relational to non-relational databases with Rails?

    - by Tam
    Good day, I have been using Rails/MySQL for the past while, but I have been hearing about Cassandra, MongoDB, CouchDB and other document-store/non-relational databases. I'm planning to explore them later as they might be a better alternative for scalability. I'm planning to start an application soon. Will it make a difference to the Rails design if I move from a relational to a non-relational database? I know Rails migrations are database-agnostic, but I wasn't sure whether moving to a non-relational database would make a difference to the design or not.

    Read the article

  • Database design to speed up Hibernate querying of a large dataset

    - by paddydub
    I currently have the tables below representing a bus network mapped in Hibernate, accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster; I load all of these tables into Lists to perform the route planner logic. I would appreciate any ideas on how to improve performance, or any suggestions for another way to approach handling a large set of data.

    Coordinate Connections table (INT, INT, INT), containing 50,000 coordinate connections:

    ID FROMCOORDID TOCOORDID
    1  1           2
    2  1           17
    3  1           63
    4  1           64
    5  1           65
    6  1           95

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

    ID LAT       LNG
    0  59.352669 -7.264341
    1  59.352669 -7.264341
    2  59.350012 -7.260653
    3  59.337585 -7.189798
    4  59.339221 -7.193582
    5  59.341408 -7.205888

    Bus Stop table (INT, INT, INT), containing 15,000 stops:

    StopID     RouteID COORDINATEID
    1000100001 100     17
    1000100002 100     18
    1000100003 100     19
    1000100004 100     20
    1000100005 100     21
    1000100006 100     22
    1000100007 100     23

    This is how long it takes to load all the data from each table:

    stop.findAll = 148ms, stops.size: 15670
    Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
    coord.findAll = 51ms, coordinates.size: 4704
    Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
    coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132

    Hibernate annotations:

    @Entity
    @Table(name = "STOPS")
    public class Stop implements Serializable {
        @Id
        @GeneratedValue
        @Column(name = "COORDINATEID")
        private Integer CoordinateID;
        @Column(name = "LAT")
        private double latitude;
        @Column(name = "LNG")
        private double longitude;
    }

    @Entity
    @Table(name = "COORDINATES")
    public class Coordinate {
        @Id
        @GeneratedValue
        @Column(name = "COORDINATEID")
        private Integer CoordinateID;
        @Column(name = "LAT")
        private double latitude;
        @Column(name = "LNG")
        private double longitude;
    }

    @Entity
    @Table(name = "COORDCONNECTIONS")
    public class CoordConnection {
        @Id
        @GeneratedValue
        @Column(name = "COORDCONNECTIONID")
        private Integer CoordinateID;

        /** From Coordinate_id value */
        @Column(name = "FROMCOORDID", nullable = false)
        private int fromCoordID;

        /** To Coordinate_id value */
        @Column(name = "TOCOORDID", nullable = false)
        private int toCoordID;

        //private Coordinate toCoordID;
    }

    Read the article

  • Database Design Primary Key, ID vs String

    - by LnDCobra
    Hi, I am currently planning to develop a music streaming application, and I am wondering what would be better as a primary key in my tables on the server: an ID int or a unique string.

    Method 1:
    Songs table: SongID(int), Title(string), Artist*(string), Length(int), Album*(string)
    Genre table: Genre(string), Name(string)
    SongGenre: SongID*(int), Genre*(string)

    Method 2:
    Songs table: SongID(int), Title(string), ArtistID*(int), Length(int), AlbumID*(int)
    Genre table: GenreID(int), Name(string)
    SongGenre: SongID*(int), GenreID*(int)

    Key: Bold = Primary Key, Field* = Foreign Key

    I'm currently designing using Method 2, as I believe it will speed up lookup performance and use less space, since an int takes a lot less space than a string. Is there any reason this isn't a good idea? Is there anything I should be aware of?
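
    For concreteness, a rough sketch of Method 2 as DDL might look like the following. This is my own illustration, not from the question; the column sizes and the referenced artists/albums tables are assumptions.

    create table genres (
      genre_id int primary key,
      name     varchar(100) not null
    );

    create table songs (
      song_id   int primary key,
      title     varchar(255) not null,
      artist_id int not null,   -- foreign key to an assumed artists table
      album_id  int not null,   -- foreign key to an assumed albums table
      length    int not null    -- e.g. length in seconds
    );

    create table song_genres (
      song_id  int not null,
      genre_id int not null,
      primary key (song_id, genre_id),
      foreign key (song_id)  references songs (song_id),
      foreign key (genre_id) references genres (genre_id)
    );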

    Read the article

  • Need help to create database schema for wholesale online tee store

    - by techiepark
    Hi, I'm currently working on a wholesale online t-shirt shop. I have done this for fixed quantity and price, and it's working fine. Now I need to do it for variable quantity and price. Here is the reference link for what I have to do. The basic tables I have created are:

    CREATE TABLE attribute (
      attribute_id int(11) NOT NULL auto_increment,
      name varchar(100) NOT NULL,
      PRIMARY KEY (attribute_id)
    );

    CREATE TABLE attribute_value (
      attribute_value_id int(11) NOT NULL auto_increment,
      attribute_id int(11) NOT NULL,
      value varchar(100) NOT NULL,
      PRIMARY KEY (attribute_value_id),
      KEY idx_attribute_value_attribute_id (attribute_id)
    );

    CREATE TABLE product (
      product_id int(11) NOT NULL auto_increment,
      name varchar(100) NOT NULL,
      description varchar(1000) NOT NULL,
      price decimal(10,2) NOT NULL,
      image varchar(150) default NULL,
      thumbnail varchar(150) default NULL,
      PRIMARY KEY (product_id),
      FULLTEXT KEY idx_ft_product_name_description (name,description)
    );

    CREATE TABLE product_attribute (
      product_id int(11) NOT NULL,
      attribute_value_id int(11) NOT NULL,
      PRIMARY KEY (product_id,attribute_value_id)
    );

    I'm not getting how to store the price based on variable quantity. Please help me to create the product table and its related tables. My requirement is the same as the above reference link.
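
    One common way to model quantity-based pricing is a price-tier table keyed by product and minimum quantity. This is offered purely as an illustrative sketch; the table and column names here are my assumptions, not from the question or the reference link.

    CREATE TABLE product_price_tier (
      product_id   int(11) NOT NULL,
      min_quantity int(11) NOT NULL,        -- tier applies from this quantity upward
      unit_price   decimal(10,2) NOT NULL,  -- price per shirt at this tier
      PRIMARY KEY (product_id, min_quantity)
    );

    -- price for a given product and order quantity: pick the highest tier not exceeding the quantity
    SELECT unit_price
    FROM product_price_tier
    WHERE product_id = 42 AND min_quantity <= 100
    ORDER BY min_quantity DESC
    LIMIT 1;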

    Read the article

  • Buddy List: Relational Database Table Design

    - by huntaub
    So, the modern concept of the buddy list: let's say we have a table called Person. Now, that Person needs to have many buddies (each buddy is also in the Person class). The most obvious way to construct the relationship would be through a join table, i.e.:

    buddyID person1_id person2_id
    0       1          2
    1       3          6

    But when a user wants to see their buddy list, the program would have to check both the person1_id and person2_id columns to find all of their buddies. Is this the appropriate way to implement this kind of table, or would it be better to add each record twice, i.e.:

    buddyID person1_id person2_id
    0       1          2
    1       2          1

    so that only one column has to be searched? Thanks in advance.
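
    To make the difference concrete, here is a rough sketch of the two lookups. This is my own illustration; the table name buddies and the example id 42 are assumptions.

    -- single-row design: both columns must be checked
    SELECT CASE WHEN person1_id = 42 THEN person2_id ELSE person1_id END AS buddy_id
    FROM buddies
    WHERE 42 IN (person1_id, person2_id);

    -- double-row design: one indexed column is enough
    SELECT person2_id AS buddy_id
    FROM buddies
    WHERE person1_id = 42;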

    Read the article

  • Database solutions for storing/searching EXIF data

    - by webdestroya
    I have thousands of photos on my site (each with a numeric PhotoID) and I have EXIF data for them (photos can have different EXIF tags as well). I want to be able to store the data effectively and search it. Some photos have more EXIF data than others, some have the same tags, and so on. Basically, I want to be able to run queries like 'select all photos that have a GPS location' or 'all photos with a specific camera'. I can't use MySQL (I tried it and it didn't work out). I thought about Cassandra, but I don't think it lets me query on fields. I looked at SimpleDB, but I would rather not pay for the service, and I want to be able to run more advanced queries on the data. Also, I use PHP and Linux, so it would be great if it could interface nicely with PHP. Any ideas?

    Read the article

  • Best way to auto-restore db on an hourly basis

    - by aron
    Hello, I have a demo site where anyone can log in and test a management interface. Every hour I would like to flush all the data in the SQL 2008 database and restore it from the original. Red Gate SQL has some awesome tools for this, however they are beyond my budget right now. Could I simply make a backup copy of the database's data file, then have a C# console app that deletes it and copies over the original? I could then have a Windows scheduled task run the .exe every hour. It's simple and free; would this work? I'm using SQL Server 2008 R2 Web edition. I understand that Red Gate is technically better because I can set it to analyze the db and only update the records that were altered, whereas the approach above is more of a sledgehammer.

    Read the article

  • Database model for saving random boolean expressions

    - by zarko.susnjar
    I have expressions like this: (cat OR cats OR kitten OR kitty) AND (dog OR dogs) NOT (pigeon OR firefly). Does anyone have an idea how to design tables to save these? Before I got the request to support brackets, I limited the usage of operators to avoid ambiguous situations - only ANDs and NOTs, or only ORs - and saved them in this manner:

    operators
    id | name
    1  | AND
    2  | OR
    3  | NOT

    keywords
    id | keyword
    1  | cat
    2  | dog
    3  | firefly

    expressions
    id | operator | keywordId
    1  | 0        | 1
    1  | 1        | 2
    1  | 3        | 3

    which was: cat AND dog NOT firefly. But now, I'm really puzzled...

    Read the article

  • How do I normalise this database design?

    - by Ian Roke
    I am creating a rowing reporting and statistics system for a client where I have a structure at the moment similar to the following:

    ID | Team     | Coaches    | Rowers     | Event     | Position | Time
    18 | TeamName | CoachName1 | RowerName1 | EventName | 1        | 01:32:34
       |          | CoachName2 | RowerName2 |           |          |
       |          |            | RowerName3 |           |          |
       |          |            | RowerName4 |           |          |

    This is an example row of data, but I would like to expand this out to a Rowers table, a Coaches table and so on. However, I don't know how best to then link those back to the Entries table, which is what this is. Has anybody got any words of wisdom they could share with me?

    Update: A Team can have any number of Coaches and Rowers, a Rower can be in many Teams (Team A, B, C etc.) and a Team can have many Coaches.
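
    Purely as an illustrative sketch of one possible normalisation (my own, not from the question; every table and column name here is an assumption), the many-to-many relationships described in the update could be broken out like this:

    CREATE TABLE teams   (team_id  INT PRIMARY KEY, name VARCHAR(100));
    CREATE TABLE coaches (coach_id INT PRIMARY KEY, name VARCHAR(100));
    CREATE TABLE rowers  (rower_id INT PRIMARY KEY, name VARCHAR(100));
    CREATE TABLE events  (event_id INT PRIMARY KEY, name VARCHAR(100));

    -- many-to-many links described in the update
    CREATE TABLE team_coaches (team_id INT, coach_id INT, PRIMARY KEY (team_id, coach_id));
    CREATE TABLE team_rowers  (team_id INT, rower_id INT, PRIMARY KEY (team_id, rower_id));

    -- one row per team per event (what the flat table above collapses into a single record)
    CREATE TABLE entries (
      entry_id        INT PRIMARY KEY,
      team_id         INT NOT NULL,
      event_id        INT NOT NULL,
      finish_position INT,
      finish_time     TIME
    );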

    Read the article

  • Database that consumes less disk space

    - by Hugo Palma
    I'm looking at solutions to store a massive quantity of information while consuming the least possible disk space. The information structure is very simple and the queries will also be very simple. I've looked at solutions like Apache Cassandra and relational databases but couldn't find a comparison where disk usage is mentioned. Any ideas on this would be great.

    Read the article

  • Rating System Database Structure

    - by Harsha M V
    I have two entity groups, Restaurants and Users. Restaurants can be rated (1-5) by users, and the rating from each user should be retrievable. Restaurant (id, name, ..... , total_number_of_votes, total_voting_points), User (id, name, ......), Rating (id, restaurant_id, user_id, rating_value). Do I need to store the average value so that it need not be calculated every time? Which table is the best place to store avg_rating, total_no_of_votes and total_voting_points?
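
    For reference, computing the aggregate values on the fly from the Rating table would look roughly like this (a sketch of mine based on the columns listed above):

    SELECT restaurant_id,
           AVG(rating_value) AS avg_rating,
           COUNT(*)          AS total_votes
    FROM Rating
    GROUP BY restaurant_id;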

    Read the article

  • In Oracle 10g, how do I see tables in schemas other than SYSTEM

    - by Yasir Arsanukaev
    In the hr schema there's a table employees. I can query data from it by qualifying the schema explicitly, SELECT count(*) FROM hr.employees, while logged in as system, but when I navigate to Home > Object Browser in the browser I can't see the employees table or the other tables in other schemas. Can I manage tables in schemas other than system while logged in as system? Maybe there's some setting which enables me to group tables by schema. My Oracle Database version is 10.2.0.1 Express Edition. IIRC I could see all tables in Oracle Database 10.1.x.y. Thanks.

    Read the article
