Search Results

Search found 70568 results on 2823 pages for 'ef database first'.

  • VirtualBox Port Forwarding to Connect to PostgreSQL Database

    - by kliao
    I'm trying to connect to a PostgreSQL database hosted on a Win7 guest from a Win7 host. I've configured security in pg_hba.conf:

        host all all 127.0.0.1/32 md5
        host all all 10.0.2.15/32 md5
        host all all 192.168.1.6/32 md5

    and set the listen_addresses setting in postgresql.conf to '*'. I think I've set up port forwarding correctly, as I see the following when I call getextradata:

        Key: VBoxInternal/Devices/e1000/0/LUN#0/Config/win7_vm1/GuestPort, Value: 5432
        Key: VBoxInternal/Devices/e1000/0/LUN#0/Config/win7_vm1/HostPort, Value: 5432
        Key: VBoxInternal/Devices/e1000/0/LUN#0/Config/win7_vm1/Protocol, Value: TCP

    This is similar to http://serverfault.com/questions/106168/cant-connect-to-postgresql-on-virtualbox-guest, but I'm not sure what I'm doing wrong. In the vbox.log file I see:

        00:00:01.019 NAT: set redirect TCP host port 5432 => guest port 5432 @ 10.0.2.15
        00:00:01.033 NAT: failed to redirect TCP 5432 => 5432

    but I'm not sure how to fix that. Any ideas? Thanks.

  • Nested Entities and calculation on leaf entity property - SQL or NoSQL approach

    - by Chandu
    I am working on a hobby project called Menu/Recipe Management. This is how my entities and their relations look:

        A Nutrient has the properties Code and Value
        An Ingredient has a collection of Nutrients
        A Recipe has a collection of Ingredients and occasionally can have a collection of other Recipes
        A Meal has a collection of Recipes and Ingredients
        A Menu has a collection of Meals

    In one of the pages, for a selected menu I need to display the effective nutrient information calculated from its constituents (Meals, Recipes, Ingredients and the corresponding Nutrients). As of now I am using SQL Server to store the data, and I am navigating the chain from my C# code, starting from each meal of the menu and then aggregating the nutrient values. I think this is not an efficient way, as this calculation is done every time the page is requested while the constituents change only occasionally. I was thinking about having a background service that maintains a table called MenuNutrients ({MenuId, NutrientId, Value}) and populates/updates this table with the effective nutrients whenever any component (Meal, Recipe, Ingredient) changes. I feel that a graph DB would be a good fit for this requirement, but my exposure to NoSQL is limited. I want to know what the alternative solutions/approaches to this requirement of displaying the nutrients of a given menu are. I hope my description of the scenario is clear.
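
    A minimal SQL sketch of the MenuNutrients idea described above (T-SQL flavored; all table and column names besides MenuNutrients are assumptions, and the Recipe level is omitted for brevity):

        -- Summary table holding pre-aggregated nutrient totals per menu
        CREATE TABLE MenuNutrients (
            MenuId     INT NOT NULL,
            NutrientId INT NOT NULL,
            Value      DECIMAL(18, 4) NOT NULL,
            PRIMARY KEY (MenuId, NutrientId)
        );

        -- Recompute the totals for one menu whenever a constituent changes
        DELETE FROM MenuNutrients WHERE MenuId = @MenuId;

        INSERT INTO MenuNutrients (MenuId, NutrientId, Value)
        SELECT mm.MenuId, inu.NutrientId, SUM(inu.Value)
        FROM MenuMeals mm
        JOIN MealIngredients mi ON mi.MealId = mm.MealId
        JOIN IngredientNutrients inu ON inu.IngredientId = mi.IngredientId
        WHERE mm.MenuId = @MenuId
        GROUP BY mm.MenuId, inu.NutrientId;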

  • Lingering database connections from Feng Office

    - by Bobby
    I've installed Feng Office on our main server, and it is working perfectly so far. Unfortunately, there seems to be a problem with the connection to the MySQL database. While the connection itself works fine, it's the reuse/pooling of connections that seems to be bugged. There are lingering/sleeping connections to the server from Feng Office that won't close and don't get reused after some time (120 seconds). Of course, those lingering processes/connections are piling up pretty fast. I've found a thread on the forums about this behavior, but the suggested fix is already applied (by default). I'm sure this is just a configuration issue, but I'm a little clueless because, of the applications on this server (a MediaWiki, a DokuWiki and some homebrewed PHP applications), Feng is the only one with this issue. The setup is a Microsoft Windows 2003 Server with MySQL 5.0.26 and Apache 2.2. Where can I start looking for clues as to why this is happening, and how do I get rid of lingering MySQL connections?
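
    As a diagnostic sketch (plain MySQL statements, not a fix for Feng's pooling itself), the sleeping connections and the server's idle timeout can be inspected and tightened like this:

        -- Lingering connections show up with Command = 'Sleep'
        SHOW FULL PROCESSLIST;

        -- How long the server keeps idle connections open, in seconds
        SHOW VARIABLES LIKE 'wait_timeout';

        -- Reap abandoned connections sooner; affects new connections
        -- and lasts until the server restarts
        SET GLOBAL wait_timeout = 120;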

  • How to store prices that have effective dates?

    - by lal00
    I have a list of products. Each of them is offered by N providers. Each provider quotes us a price for a specific date. That price is effective until that provider decides to set a new price. In that case, the provider will give the new price with a new date. The MySQL table header currently looks like:

        provider_id, product_id, price, date_price_effective

    Every other day, we compile a list of products/prices that are effective for the current day. For each product, the list contains a sorted list of the providers that have that particular product. That way, we can order certain products from whoever happens to offer the best price. To get the effective prices, I have a SQL statement that returns all rows that have date_price_effective >= NOW(). That result set is processed with a Ruby script that does the sorting and filtering necessary to obtain a file that looks like this:

        product_id_1,provider_1,provider_3,provider_8,provider_10...
        product_id_2,provider_3,provider_2,provider_1,provider_10...

    This works fine for our purposes, but I still have an itch that a SQL table is probably not the best way to store this kind of information. I have a feeling that this kind of problem has been solved previously in other, more creative ways. Is there a better way to store this information than in SQL? Or, if using SQL, is there a better approach than the one I'm using?
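
    For what it's worth, the "price in force today" can also be computed directly in SQL. A sketch, assuming the table is named prices; note that it keeps the latest row dated on or before today (date_price_effective <= NOW()), since rows with date_price_effective >= NOW() would be future-dated prices:

        SELECT p.provider_id, p.product_id, p.price
        FROM prices p
        JOIN (
            SELECT provider_id, product_id,
                   MAX(date_price_effective) AS max_date
            FROM prices
            WHERE date_price_effective <= NOW()
            GROUP BY provider_id, product_id
        ) latest
          ON latest.provider_id = p.provider_id
         AND latest.product_id  = p.product_id
         AND latest.max_date    = p.date_price_effective
        ORDER BY p.product_id, p.price;   -- cheapest provider first per product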

  • choosing the right RAID level for a PostgreSQL database

    - by Sergey
    Hi, I got a disk array appliance of 8 disks, 1 TB each (UltraStor RS8IP4). It will be used solely by a PostgreSQL database, and I am trying to choose the best RAID level for it. The top priority is read performance, since we operate on large data sets (tables, indexes) and we do lots of searches/scans. With the old disks that we have now, most slowdowns happen on SELECTs. Fault tolerance is less important; it can be 1 or 2 disks. Space is the least important factor; even 1 TB will be enough. Which RAID level would you recommend in this situation? The current options are 60, 50 and 10, but probably other options can be even better.

  • Should a primary key be immutable?

    - by Vincent Malgrat
    A recent question on Stack Overflow provoked a discussion about the immutability of primary keys. I had thought of it as a kind of rule that primary keys should be immutable, and that if there is a chance that some day a primary key would be updated, you should use a surrogate key instead. However, it is not in the SQL standard, and some RDBMSs' "cascade update" feature allows a primary key to change. So my question is: is it still bad practice to have a primary key that may change? What are the cons, if any, of having a mutable primary key?
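
    For context, the "cascade update" behavior mentioned above looks like this in SQL DDL (a minimal sketch with illustrative names):

        CREATE TABLE countries (
            country_code CHAR(2) PRIMARY KEY,      -- natural, potentially mutable key
            name         VARCHAR(100) NOT NULL
        );

        CREATE TABLE cities (
            city_id      INT PRIMARY KEY,
            country_code CHAR(2) NOT NULL,
            FOREIGN KEY (country_code) REFERENCES countries (country_code)
                ON UPDATE CASCADE                  -- child rows follow a key change
        );

        -- The new key value propagates to every referencing row in cities.
        UPDATE countries SET country_code = 'UK' WHERE country_code = 'GB';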

  • How an SSD hard drive affected the speed of your website (ASP.NET/LINQ/MS SQL database)

    - by Sergey Osypchuk
    I have a small database (<1 GB), but we have a lot of complex logic in the website, and clients complain about render time, which is 3-5 seconds. We are not Google, and thousands of users a day is our dream, so size is not a problem, but speed is important. Can anybody share their experience with SSD drives for an ASP.NET (MVC)/LINQ/MS SQL based application? How much did your performance increase? UPDATE: this whitepaper states that it will be 20 times faster: http://www.texmemsys.com/files/f000174.pdf

  • zero downtime during database schema upgrade on SQL 2008

    - by eject
    I have a web application on IIS7 with SQL Server 2008 as the RDBMS. I need zero downtime during future upgrades of the ASP.NET code and the DB schema, and I need the right scenario for this. I have 2 web servers, 2 SQL servers and one HTTP load balancer which allows switching the backend web server for web requests. The main goal is to keep the 1st web server and DB server up and running, update the code and DB schema on the 2nd server, and then switch all the requests to the 2nd server. And then the main problem: how to copy the data from the 1st database (which was changed during the upgrade) to the 2nd.

  • Serialized values or separate table, which is more efficient?

    - by Aravind
    I have a Rails model email_condition_string with a word column in it. Now I have another model called request_creation_email_config with the following columns:

        admin_filter_group:references
        vendor_service:references
        email_condition_string:references

    email_condition_string has many request_creation_email_configs, and request_creation_email_config belongs to email_condition_string. Instead of this model, a colleague of mine is suggesting that storing the words inside the same model as comma-separated values is more efficient than storing them as a separate model. Is that right?
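
    A sketch of why the separate table usually queries better than a comma-separated column (hypothetical SQL over the tables implied by the models above; words_csv is an assumed name for the colleague's variant):

        -- Normalized: matching a word is an indexable equality test
        SELECT c.*
        FROM request_creation_email_configs c
        JOIN email_condition_strings s ON s.id = c.email_condition_string_id
        WHERE s.word = 'urgent';

        -- Comma-separated column: matching needs pattern tricks that defeat indexes
        SELECT *
        FROM request_creation_email_configs
        WHERE CONCAT(',', words_csv, ',') LIKE '%,urgent,%';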

  • Implementing new required feature after software release

    - by TiagoBrenck
    Fake scenario: there is a piece of software that was released 1 year ago. The software is used to map and register all kinds of animals on our planet. When the software was released, the client only needed to know the scientific name of the animal, a flag indicating whether it is at risk of extinction, and a scale of how dangerous it is (this is fake software and a fake specification; I don't want to discuss that here). There are already 100,000 animal records saved in the DB.

    New feature: one year later, the client wants a new feature. It is really important to him to know the animals' classes, and this is a required field. So he asks me to add a field to input the animal class, and this field is required. Or maybe where the animal was discovered.

    Problem: I already have 100,000 recorded animals without a class or a discovery location, but I need to add a new column to store this information, and this column can't be null. I don't have a default value for this situation (there isn't a default animal class or discovery location). I don't want to keep the requirement rule only in my software; my DB must have this requirement too (I like to keep business rules in the DB as well). What are the alternatives to solve this situation? I am in a situation where this new feature cannot be previewed or reviewed for the existing records. The time has already passed, and I can't go back in time to get it.
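
    One common migration pattern for this (a sketch in SQL Server flavored DDL; table, column and sentinel names are illustrative): add the column as nullable, backfill the legacy rows with an explicit sentinel, then enforce NOT NULL so the rule lives in the DB too.

        -- 1. Add the new column as nullable so the 100,000 existing rows remain valid
        ALTER TABLE animals ADD animal_class VARCHAR(50) NULL;

        -- 2. Backfill legacy rows, making "unknown before the feature existed"
        --    an explicit, queryable fact rather than a NULL
        UPDATE animals SET animal_class = 'UNCLASSIFIED' WHERE animal_class IS NULL;

        -- 3. Enforce the business rule at the database level from now on
        ALTER TABLE animals ALTER COLUMN animal_class VARCHAR(50) NOT NULL;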

  • Does my approach for building a real time monitoring system make sense? [closed]

    - by sameer
    I am developing an application that will display a dashboard showing data from different SQL databases. This needs to happen in almost real time; our refresh time is about 5 minutes. My approach so far:

        - Develop a Windows service to accumulate the data from the various SQL Server instances.
        - Persist those details into a SQL DB, from which the dashboard will display them on the web page.
        - Have the Windows service trigger the fetching of data every x minutes.
        - Store the details of the SQL Server instances in the SQL DB that the Windows service refers to.

    Does my approach make sense?
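
    A sketch of what the persistence step might look like (SQL Server flavored; the table name and columns are assumptions):

        CREATE TABLE metric_snapshots (
            instance_name VARCHAR(128) NOT NULL,   -- which SQL Server instance
            metric_name   VARCHAR(128) NOT NULL,   -- e.g. 'batch_requests_per_sec'
            metric_value  DECIMAL(18, 4) NOT NULL,
            collected_at  DATETIME NOT NULL,
            PRIMARY KEY (instance_name, metric_name, collected_at)
        );

        -- Dashboard query: values collected within the 5-minute refresh window
        SELECT instance_name, metric_name, metric_value
        FROM metric_snapshots
        WHERE collected_at >= DATEADD(MINUTE, -5, GETDATE());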

  • Entity Framework 6: Alpha2 Now Available

    - by ScottGu
    The Entity Framework team recently announced the 2nd alpha release of EF6. The alpha 2 package is available for download from NuGet. Since this is a pre-release package, make sure to select "Include Prereleases" in the NuGet package manager, or execute the following from the package manager console to install it:

        PM> Install-Package EntityFramework -Pre

    This week's alpha release includes a bunch of great improvements in the following areas:

        - Async language support is now available for queries and updates when running on .NET 4.5.
        - Custom conventions now provide the ability to override the default conventions that Code First uses for mapping types, properties, etc. to your database.
        - Multi-tenant migrations allow the same database to be used by multiple contexts, with full Code First Migrations support for independently evolving the model backing each context.
        - Using Enumerable.Contains in a LINQ query is now handled much more efficiently by EF and the SQL Server provider, resulting in greatly improved performance.
        - All features of EF6 (except async) are available on both .NET 4 and .NET 4.5. This includes support for enums and spatial types and the performance improvements that were previously only available when using .NET 4.5.
        - Start-up time for many large models has been dramatically improved thanks to improved view generation performance.

    Below are some additional details about a few of the improvements above.

    Async Support

    .NET 4.5 introduced the Task-Based Asynchronous Pattern that uses the async and await keywords to help make writing asynchronous code easier. EF 6 now supports this pattern. This is great for ASP.NET applications, as database calls made through EF can now be processed asynchronously, avoiding any blocking of worker threads. This can increase scalability on the server by allowing more requests to be processed while waiting for the database to respond. The following code shows an MVC controller that is querying a database for a list of location entities:

        public class HomeController : Controller
        {
            LocationContext db = new LocationContext();

            public async Task<ActionResult> Index()
            {
                var locations = await db.Locations.ToListAsync();
                return View(locations);
            }
        }

    Notice above the call to the new ToListAsync method with the await keyword. When the web server reaches this code it initiates the database request, but rather than blocking while waiting for the results to come back, the thread that is processing the request returns to the thread pool, allowing ASP.NET to process another incoming request with the same thread. In other words, a thread is only consumed when there is actual processing work to do, allowing the web server to handle more concurrent requests with the same resources.

    A more detailed walkthrough covering async in EF is available with additional information and examples. Also, a walkthrough is available showing how to use async in an ASP.NET MVC application.

    Custom Conventions

    When working with EF Code First, the default behavior is to map .NET classes to tables using a set of conventions baked into EF. For example, Code First will detect properties that end with "ID" and configure them automatically as primary keys. However, sometimes you cannot or do not want to follow those conventions and would rather provide your own. For example, maybe your primary key properties all end in "Key" instead of "Id".

    Custom conventions allow the default conventions to be overridden, or new conventions to be added, so that Code First can map by convention using whatever rules make sense for your project. The following code demonstrates using custom conventions to set the precision of all decimals to 5. As with other Code First configuration, this code is placed in the OnModelCreating method, which is overridden on your derived DbContext class:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Properties<decimal>()
                .Configure(x => x.HasPrecision(5));
        }

    But what if there are a couple of places where a decimal property should have a different precision? Just as with all the existing Code First conventions, this new convention can be overridden for a particular property simply by explicitly configuring that property using either the fluent API or a data annotation. A more detailed description of custom Code First conventions is available here.

    Community Involvement

    I blogged a while ago about EF being released under an open source license. Since then a number of community members have made contributions, and these are included in EF6 alpha 2. Two examples of community contributions are:

        - AlirezaHaghshenas contributed a change that increases the startup performance of EF for larger models by improving the performance of view generation. The change means that it is less often necessary to use pre-generated views.
        - UnaiZorrilla contributed the first community feature to EF: the ability to load all Code First configuration classes in an assembly with a single method call like the following:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Configurations
                .AddFromAssembly(typeof(LocationContext).Assembly);
        }

    This code will find and load all the classes that inherit from EntityTypeConfiguration<T> or ComplexTypeConfiguration<T> in the assembly where LocationContext is defined. This reduces the amount of coupling between the context and the Code First configuration classes, and is also a very convenient shortcut for large models.

    Other upcoming features coming in EF6

    Lots of information about the development of EF6 can be found on the EF CodePlex site, including a roadmap showing the other features that are planned for EF6. One of the nice upcoming features is connection resiliency, which will automate the process of retrying database operations on transient failures, common in cloud environments and with databases such as the Windows Azure SQL Database. Another often requested feature that will be included in EF6 is the ability to map stored procedures to query and update operations on entities when using Code First.

    Summary

    EF6 is the first open source release of Entity Framework, developed on CodePlex. The alpha 2 preview release of EF6 is now available on NuGet and contains some really great features for you to try. The EF team are always looking for feedback from developers, especially on the new features such as custom Code First conventions and async support. To provide feedback you can post a comment on the EF6 alpha 2 announcement post, start a discussion, or file a bug on the CodePlex site.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

  • SQL language drawbacks, The Third Manifesto

    - by David Portabella
    Some time ago I read about drawbacks of the SQL language (the basic language specification, not vendor-specific ones), and one of the drawbacks was that the language does not allow you to create a set of tuples that don't come from a table. For instance:

        SELECT firstName, lastName FROM people;

    This creates a set of tuples coming from the table people. Now, if I don't have this table people and I want to return a constant, I'd need something like this to return a set of two tuples (this would not require having a table):

        SELECT VALUES ('james', 'dean'), ('tom', 'cruisse');

    Why would I need that? For the same reasons that we can define constants (not only basic types, but objects and arrays also) in any advanced programming language. Workarounds: yes, I could create a temporary table, fill in the data, and SELECT from that table. This is a hack to overcome the drawbacks of the poor SQL language. I think I read about this somewhere in "The Third Manifesto", but I can't find the paragraph/example talking about this concrete drawback anymore. Do you know a reference for it?
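
    As a point of reference, the SQL standard does define a table value constructor, and several engines accept it (PostgreSQL, and SQL Server from 2008 on, among others). A sketch:

        -- A derived table built from constant rows; no base table required
        SELECT t.firstName, t.lastName
        FROM (VALUES ('james', 'dean'),
                     ('tom',   'cruisse')) AS t (firstName, lastName);

        -- PostgreSQL also allows VALUES as a standalone statement
        VALUES ('james', 'dean'), ('tom', 'cruisse');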

  • Master Data Management

    - by Logicalj
    I am looking for a very flexible, easy-to-integrate and dynamic application with as many features as possible for Master Data Management. As Master Data Management is used to manage operational data, analytical data and master data, I want guidance on what exactly is expected from Master Data Management, and what the basic and challenging scenarios to be covered or resolved in Master Data Management are. Please guide me on all the possible aspects of Master Data Management, such as data cleansing, data management, data analysis, etc.

  • Speaking - 24 Hours of PASS, Summit Preview Edition

    - by AllenMWhite
    There's so much to learn to be effective with SQL Server, and you have an opportunity to immerse yourselves in 24 hours of free technical training this week from PASS, via the 24 Hours of PASS event. I'll be presenting an introductory session on PowerShell called PowerShell 101 for the SQL Server DBA. Here's the abstract: The more you have to manage, the more likely you'll want to automate your processes. PowerShell is the scripting language that will make you truly effective at managing lots of...(read more)

  • There is a problem with the Office database

    - by RomanT
    After a Time Machine restore, Office 2011 is having kittens over permissions, it would seem. Having attempted a 'repair' from Disk Utility, I am still seeing 'there is a problem with the Office database' upon startup, after which Word/Excel work without issues. Outlook, on the other hand, won't even start. Given the obvious message here ("You do not have write access to the Outlook application folder"), where is the DB located so I can check? Ideas? Thank you.

  • Need advice concerning Feature Based Development when knowledge DB is involved

    - by voroninp
    We develop a BackOffice application which is used to edit our knowledge DB. Now our main product's development team is shifting to feature-based development, and we need to support several DBs with non-identical data schemas (the schema changes slightly from DB to DB). The information from the knowledge DB is extracted by a script and then distributed to the clients. We also need to support merging these DBs. We are now analyzing the pros and cons of different approaches, and are discussing this one: one working DB (WDB) with one DB for each feature branch (FDB). The approved data is moved from the WDB to the FDBs, so we need to support only one script for each branch, and that script will extract data from the corresponding FDB. Nevertheless, we would have to code the differences between the FDBs and the WDB manually. Maybe some automatic mapping tools exist? I also wish to know whether classic solutions to similar problems already exist. Can anyone share the best practices for this case?

  • self referencing tables, good or bad?

    - by NimChimpsky
    When representing geographical locations within an application, the design of the underlying data model suggests two clear options (or maybe more?): one table with a self-referencing parent_id column, e.g. UK - London (London's parent_id = UK's id), or two tables with a one-to-many relationship using a foreign key. My preference is for one self-referencing table, as it easily allows extending into as many sub-regions as required. In general, do people veer away from self-referencing tables, or are they A-OK?
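
    A sketch of the self-referencing design (illustrative names), together with the recursive common table expression that walks all sub-regions of a starting region:

        CREATE TABLE regions (
            id        INT PRIMARY KEY,
            parent_id INT NULL REFERENCES regions (id),  -- NULL for top level, e.g. UK
            name      VARCHAR(100) NOT NULL
        );

        -- All regions under the UK, however deeply nested
        -- (PostgreSQL/MySQL 8+ syntax; SQL Server omits the RECURSIVE keyword)
        WITH RECURSIVE subregions AS (
            SELECT id, parent_id, name FROM regions WHERE name = 'UK'
            UNION ALL
            SELECT r.id, r.parent_id, r.name
            FROM regions r
            JOIN subregions s ON r.parent_id = s.id
        )
        SELECT * FROM subregions;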

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined question to focus on the core issue.

    Context: I have historical data about property (house) sales collected from various sources in a centralized/cloud data source (assume info collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source.

    Example queries:

        - Simple: for a given XYZ postcode, what is the average house price for a 3-bedroom house?
        - Complex: what is the estimated price for a house at "DD, Some Street, XYZ Postcode" (worked out from average values of historic data filtered by various characteristics of the house: postcode, number of bedrooms, total area, and other deeper insights like building type, year built, features)?

    In addition to the average price, the application should support other property info: maximum or minimum price, etc., and a trend (graph) on a selected property attribute over a period of time. Hence, the queries should not enforce a search based on a primary key or a few fixed fields. In other words, queries can be:

        - What is the change in 3-bedroom house prices (irrespective of location) over the last 30 days?
        - What kind of properties can we get for X price (irrespective of location or house type)?

    The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interfaces, DW, or something else) this problem (dynamic queries on historic data) belongs to, so that I can explore further.

    My findings so far (I could be wrong on the following, so please correct me if you think so):

        - I briefly read about BI/data analytics. I think it is a heavyweight solution for my problem and has scalability issues.
        - DB design: as I understand it, an RDBMS works well if you know the data model at design time. I expect the attributes of a property, or of other entities (user) that I am going to bring in, to evolve quickly, so maintenance would be an issue. And as I am going to have multiple users executing queries at the same time, performance would be a bottleneck.
        - Other options like graph DBs (http://www.tinkerpop.com/) seem a bit complex (they are good, but using tools meant for a generic purpose makes me feel like I'm doing assembly programming to solve my problem).
        - Big-data-related solutions are for analyzing data from multiple unrelated domains.

    So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of the back-end for property listing or similar portals.)
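
    At least the simple cases reduce to plain SQL aggregates (a sketch over an assumed sales table, using MySQL-flavored date arithmetic):

        -- Average price of 3-bedroom houses in one postcode
        SELECT AVG(price) AS avg_price
        FROM sales
        WHERE postcode = 'XYZ'
          AND bedrooms = 3;

        -- 30-day trend for 3-bedroom houses, irrespective of location
        SELECT DATE(sale_date) AS day, AVG(price) AS avg_price
        FROM sales
        WHERE bedrooms = 3
          AND sale_date >= NOW() - INTERVAL 30 DAY
        GROUP BY DATE(sale_date)
        ORDER BY day;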

  • Download or view a server's WINS database

    - by Segfault
    I am trying to troubleshoot a WINS browsing problem in a Server 2008 AD forest. I am in one domain, and the problem is with a sibling domain. What command can I use to dump or view the WINS database on a particular AD server by name, in a different domain than mine? I thought one of the subcommands of net would have an option for this, but I can't find it. I also tried browstat.exe getblist, but it gives me the error message "The list of servers for this workgroup is not currently available". I am not a domain admin and don't have any rights to either domain other than those of a normal user. Does anyone know how this can be done?

  • EAV - is it really bad in all scenarios?

    - by Giedrius
    I'm thinking of using EAV for some of the stuff in one of my projects, but all questions about it on Stack Overflow end up with answers calling EAV an anti-pattern. I'm wondering, is it that wrong in all cases? Take a shop product entity: it has common features, like name, description, image and price, that take part in logic in many places, and it has (semi-)unique features; a watch and a beach ball would be described by completely different aspects. So I think EAV would fit for storing those (semi-)unique features? All this assumes that, for showing the product list, there is enough info in the product table (meaning no EAV is involved), and the data saved using EAV is used only when showing one product, comparing up to 5 products, etc. I've seen such an approach in Magento commerce, and it is quite popular, so maybe there are cases where EAV is reasonable?
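
    A minimal sketch of that hybrid layout (illustrative names): common, heavily queried attributes stay as real columns, and EAV rows carry only the long tail that is read on the product detail page.

        CREATE TABLE products (
            id          INT PRIMARY KEY,
            name        VARCHAR(200) NOT NULL,
            description TEXT,
            price       DECIMAL(10, 2) NOT NULL
        );

        -- EAV rows for the (semi-)unique features
        CREATE TABLE product_attributes (
            product_id INT NOT NULL REFERENCES products (id),
            attribute  VARCHAR(100) NOT NULL,   -- e.g. 'water_resistance_m'
            value      VARCHAR(255) NOT NULL,   -- stored as text; cast on read
            PRIMARY KEY (product_id, attribute)
        );

        -- Product listing touches only products; the detail page pulls
        -- the long-tail attributes for a single product
        SELECT attribute, value
        FROM product_attributes
        WHERE product_id = 42;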

  • Database or website of kernel config files?

    - by Kami
    I experienced a kernel panic after trying to compile a Gentoo kernel for a Sun UltraSPARC T5120 server. The kernel panic came from missing support for the SAS disk controller in the menuconfig. I've wasted so much time because I had no clue about the hardware I was using. I know that the kernel config depends on what you plan to do with your machine, but I want to have a configuration file that at least matches my hardware! Is there a website or database that provides menuconfig kernel configuration files for known or branded hardware like Dell servers or Apple computers?

  • Is OpenStack suitable as a fault tolerant DB host?

    - by Jit B
    I am trying to design a fault-tolerant DB cluster (the schema does not matter) that would not require much maintenance. After looking at almost everything from MySQL to MongoDB to HBase, I still find that no DB is easily scalable. Cassandra comes close, but it has its own set of problems. So I was thinking: what if I run something like MySQL or OrientDB on top of a large OpenStack VM? The VM would be fault tolerant by itself, so I don't need to handle that at the DB level. Is it viable? Has it been done before? If not, what are the possible problems with this approach?

  • Upload large database SQL file

    - by Devy
    I have a database of more than 20 GB in size on my hard disk. What is the best way to upload it with the least possible (money) load on the server?

        - I'm on Windows 7.
        - I have FTP and SSH access to the server.

    I avoid using FTP because my connection cuts off a lot; I can't imagine re-uploading the file again after failing at 99%. I found some tools that split the large .sql file into small .sql files, but they didn't mention how to gather these files back into one file. Another way is to archive the big .sql file to .rar with the -v option, upload the parts through FTP, and then unpack them. But unpacking will also cost, right? I know it will cost in any case, but any best practice will be strongly appreciated.

  • Fries for all?

    - by A&C Redaktion
    Yes, dear partners: there is now a charming video clip on how to protect yourself and your customers from unwanted access, one that in just one minute makes the leap from French fries to the Oracle Access Management Suite. A playful introduction to the topic of access rights which, with its well-executed surprise effect, is also great to use in customer conversations. Watch it now, click "Like", recommend it, and link to it! Further information on the Access Management portfolio is available online: http://www.oracle.com/us/products/middleware/identity-management/access-management/overview/index.html Oracle also has a new answer to the Mobile & Social topics currently being discussed in the market: http://www.oracle.com/technetwork/middleware/id-mgmt/overview/oamms-1696162.html Another video worth watching can be found here: http://www.oracle.com/us/products/middleware/identity-management/oiam/overview/index.html
