Search Results

Search found 34093 results on 1364 pages for 'database architecture'.

  • 2 min video about SQL Compare

    - by CatherineRussell
    It is nice to start blogging again! I am working on a new project in a small company now. We do not have a full-time database admin, so I have to cover multiple roles: gathering requirements, writing docs and creating diagrams, designing the app, writing code, testing, and the DBA role. I am not a DBA, but I have to make day-to-day database changes: adding new columns and tables. Check out this 2-minute video about SQL Compare. The tool saves time by automatically comparing and synchronizing database schemas; it eliminates mistakes when migrating database changes from dev to test to production, speeds up the deployment of new database schema updates, generates T-SQL scripts to update one database to match the schema of another, finds and fixes errors caused by differences between databases, and keeps an accurate history of previous database changes. http://www.red-gate.com/products/SQL_Compare/index.htm
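
    To make the "generates T-SQL scripts" point concrete: the output of a schema-compare tool is just ordinary DDL run against the target database. The snippet below is purely illustrative (hand-written here, not produced by SQL Compare; table and column names are invented):

        -- Illustrative only: bring a target database in line with a source
        -- schema that gained a column and an index.
        ALTER TABLE dbo.Customers ADD LoyaltyTier varchar(20) NULL;
        GO
        CREATE NONCLUSTERED INDEX IX_Customers_LoyaltyTier
            ON dbo.Customers (LoyaltyTier);
        GO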

    Read the article

  • Big Data – Buzz Words: What is HDFS – Day 8 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what MapReduce is. In this article we will take a quick look at another of the four most important buzz words around Big Data – HDFS.

    What is HDFS? HDFS stands for Hadoop Distributed File System, and it is the primary storage system used by Hadoop. It provides high-performance access to data across Hadoop clusters and is usually deployed on low-cost commodity hardware. In commodity hardware deployments, server failures are common, so HDFS is built for high fault tolerance. The data transfer rate between compute nodes in HDFS is very high. HDFS splits big data into smaller blocks and distributes them across different nodes, and it also copies each block to multiple nodes. Hence, when any node holding the data crashes, the system can automatically use a copy from a different node and continue processing. This is the key feature of HDFS.

    Architecture of HDFS: HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files. In addition to the NameNode there are multiple DataNodes, one for each data server. In HDFS a big file is split into one or more blocks, and those blocks are stored in a set of DataNodes. The primary task of the NameNode is to open, close or rename files and directories and to regulate access to the file system, whereas the primary task of a DataNode is to read from and write to the file system. A DataNode is also responsible for creating, deleting or replicating data based on instructions from the NameNode. In practice, the NameNode and DataNode are software components, written in Java, designed to run on commodity machines.

    Visual Representation of HDFS Architecture: Let us understand how HDFS works with the help of the diagram. The client app (HDFS client) connects to the NameNode as well as to DataNodes. Client access to a DataNode is regulated by the NameNode: the NameNode decides which DataNode the client may use and then lets the client connect to that DataNode directly. A big data file is divided into multiple data blocks (let us assume those blocks are A, B, C and D). The client app then writes the data blocks directly to a DataNode. It does not have to write to every node; it writes to one node, and the NameNode decides on which other DataNodes the data will be replicated. In our example the client app writes directly to DataNode 1 and DataNode 3, and the data blocks are then automatically replicated to other nodes. The information about which data block is placed on which DataNode is reported back to the NameNode.

    High Availability During Disaster: Because multiple DataNodes hold the same data blocks, if any DataNode suffers a disaster the process continues: another DataNode takes over serving the specific data blocks that were on the failed node. This design provides very high fault tolerance and high availability. Notice that there is only a single NameNode in this architecture. If that node fails, the entire Hadoop application stops working, because it is the single place where all the metadata is stored. As this node is so critical, it is usually replicated to another cluster as well as to another data rack.
    Although that replicated node is not operational in the architecture, it has all the data necessary to take over the NameNode's job if the NameNode fails. The entire Hadoop architecture is built to keep functioning even when there are node failures or hardware malfunctions. It rests on the simple premise that the data is so big that no single piece of hardware can manage it properly; we need lots of commodity (cheap) hardware to manage our big data, and hardware failure is a fact of life with commodity servers. To reduce the impact of hardware failure, the Hadoop architecture is built to work around non-functioning hardware.

    Tomorrow: In tomorrow's blog post we will discuss the importance of the relational database in Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • One Week To Go: OTN Architect Day: Cloud Computing

    - by Bob Rhubart
    One week remains until OTN Architect Day: Cloud Computing kicks off at the spectacular Oracle HQ campus in Redwood Shores, CA. The event is free, and there is still time to register.
    When: Tuesday July 9, 2013, 8:30am - 12:30pm
    Where: Oracle Conference Center, 350 Oracle Pkwy, Redwood City, CA 94065
    Register now. It's free! Here's the latest update to the event agenda:
    8:30am - 9:00am Registration and Continental Breakfast
    9:00am - 9:45am Keynote: 21st Century IT | Dr. James Baty, VP, Global Enterprise Architecture Program, Oracle
    Imagine a time long, long ago. A time when servers were certified and dedicated to specific applications, when anything posted on an enterprise web site was from restricted, approved channels, and when we tried to limit the growth of 'dirty' data and storage. Today, applications are services running in the multi-tenant hybrid cloud. Companies beg their customers to tweet them, friend them, and publicly rate their products. And constantly analyzing a deluge of Internet, social and sensor data is the key to creating the next super-successful product, or capturing an evil terrorist. The old IT architecture was planned, dedicated, stable, controlled, with separate and well-defined roles. The new architecture is shared, dynamic, continuous, XaaS, DevOps. This keynote session describes the challenges and opportunities that the new business / IT paradigms present to the IT architecture and architects.
    9:45am - 10:30am Technical Session: Oracle Cloud: A Case Study in Building a Cloud | Anbu Krishnaswami, Enterprise Architect, Oracle
    Building a cloud can be challenging thanks to the complex requirements unique to cloud computing and the massive scale typically associated with it. Cloud providers can take an Infrastructure as a Service (IaaS) approach and build a cloud on virtualized commodity hardware, or they can take the Platform as a Service (PaaS) path, a service-oriented approach based on pre-configured, integrated, engineered systems. This presentation uses the Oracle Cloud itself as a case study in the use of engineered systems, demonstrating how the technical design of engineered systems is leveraged for building PaaS and SaaS cloud services and a cloud management infrastructure. The presentation will also explore the principles, patterns, best practices, and architecture views provided in Oracle's Cloud reference architecture.
    10:30am - 10:45am Break
    10:45am - 11:30am Technical Session: Database as a Service | Markus Michalewicz, Senior Principal Product Manager, Oracle Real Application Clusters (RAC)
    New applications are now commonly built in a cloud model, where the database is consumed as a service, and many established business processes are beginning to migrate to database as a service (DBaaS). This adoption of DBaaS is made possible by the availability of new capabilities in the database that enable resource pooling, dynamic resource management, model-based provisioning, metered use, and effective quality-of-service controls. This session will examine the catalog of database services at a large commercial bank to understand how these capabilities are enabling DBaaS for a wide range of needs within the enterprise.
    11:30am - 12:00pm Panel Q&A: Dr. James Baty, Anbu Krishnaswami, and Markus Michalewicz respond to audience questions.
    Registration is free, but seating is limited, so register now.

    Read the article

  • ArchBeat Link-o-Rama Top 20 for June 17-23, 2012

    - by Bob Rhubart
    The most popular items shared via my social networks for the week of June 17-23, 2012:
    - Simple Made Easy | Rich Hickey
    - InfoQ: Current Trends in Enterprise Mobility
    - Cloud Bursting between AWS and Rackspace | High Scalability
    - A Distributed Access Control Architecture for Cloud Computing
    - Congrats to @MNEMONIC01 on his new book: Oracle WebLogic Server 12c: First Look
    - BI Architecture Master Class for Partners - Oracle Architecture Unplugged
    - Guide to integration architecture | Stephanie Mann
    - Oracle Data Integrator 11g - Faster Files | David Allan
    - Recap: EMEA User Group Leaders Meeting Latvia May 2012
    - Starting a cluster | Mark Nelson
    - SOA, Cloud & Service Technology Symposium 2012 London - Special Oracle Discount
    - Enterprise 2.0 Conference: Building Social Business | Oracle WebCenter Blog
    - FY13 Oracle PartnerNetwork Kickoff - Tues June 26, 2012 | @oraclepartners
    - Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6? | Ricardo Ferreira
    - Call for Nominations: Oracle Fusion Middleware Innovation Awards 2012 - Win a free pass to #OOW12
    - Hibernate4 and Coherence | Rene van Wijk
    - Eclipse and Oracle Fusion Development - Free Virtual Event, July 10th
    - Why building SaaS well means giving up your servers | GigaOM
    - Personas - what, why & how | Mascha van Oosterhout
    - Oracle Public Cloud Architecture | Tyler Jewell
    Thought for the Day: "There is only one thing more painful than learning from experience and that is not learning from experience." — Archibald MacLeish (May 7, 1892 – April 20, 1982) Source: SoftwareQuotes.com

    Read the article

  • Disaster Recovery Discovery

    - by Rodney Landrum
    Last weekend I joined several of my IT staff on a mission to perform a DR test in our remote colo center in a large Southeast US city. Can I be more obtuse? The goal was simple for me as the sole DBA in a throng of Windows, storage, network and SAN admins: restore the databases and make them work. There were 4 applications backed by 7 SQL Server databases on 4 different SQL Server instances. We would maintain the original server names, but beyond that it was fair game. We had time to prepare, so I was able to script out or otherwise automate the recovery process. I used sp_help_revlogin for three of the servers, a bit of a cheat actually, because restoring the Master database on the target DR servers was the specified course of action according to the DR procedures (the caveat "IF REQUIRED" left it open to interpretation). I really wanted to avoid restoring Master for a number of reasons, but mainly because I did not want to deal with issues starting SQL services afterward. Having to account for the location of TempDB and the version conflicts of the resource DBs were just two of the battles I chose not to fight, not to mention other system database location problems that might arise and prevent SQL from starting. I was going to have to restore all of the user databases anyway, so I would not really gain any benefit, outside of logins, from taking the time to restore the source Master database over the newly installed one on the fresh server.

    What I wanted was the ability to restore the Master database as a user database, call it Master_Mine, from a backup of the source system, and then use that restored database to script the SQL logins and passwords on the DR systems. While I did not attempt this on the trip, the thought stuck in my mind, and this past week I succeeded at scripting user accounts and passwords using only a restored copy of the Master database. Granted, there were several challenges to overcome. Also, as is usual for work like this, the usual disclaimers apply: this is not something I would imagine Microsoft would condone or support, and this was really only an experiment to learn whether it was even possible. While I have tested the process with success, I would not use this technique in a documented procedure, because future updates to SQL Server may render it non-functional.

    I thought at first, incorrectly of course, that I could use sp_help_revlogin on a restored copy of the Master database I named Master_Mine. Since sp_help_revlogin uses the system schema objects sys.syslogins and sys.server_principals, this was not going to work: all results would come from the real Master database. To test this I added a SQL login via SSMS, backed up Master, restored it as Master_Mine, and then deleted the login. The test account should presumably still be in the Master_Mine database, and the goal was to get to it and script out its creation with its password hash, so that I would not need to know the password and any applications that stored that password would not have to be altered in the DR scenario; they would just work as expected. Once I realized that unmodified sp_help_revlogin would not work, I began looking deeper. Knowing that sys.syslogins and sys.server_principals are system views, their underlying code should be available with sp_helptext, right? They were. And this led me to the two tables sys.sysxlgns and sys.sysprivs, where the data I needed was stored.
    These tables existed in both the real Master and the restored copy, Master_Mine. I used this information to tweak the sp_help_revlogin stored procedure to use these tables instead to build the logins cursor used in sp_help_revlogin. For the password hash, sp_help_revlogin uses the function LoginProperty(), which takes a login name and the option 'passwordhash' and returns the hash for that login. Unfortunately, it requires the login to exist in the Master database, so it would not work here. Another slight modification I had to make was to pull the password hash itself (pwdhash from sys.sysxlgns) into the logins cursor and comment out the section of sp_help_revlogin that uses LoginProperty. Instead, I pass the pwdhash value as the variable @PWD_varbinary to the sp_hexadecimal stored procedure, which is also created by and used within the code provided by Microsoft in the link above for sp_help_revlogin.

    The final challenge: sys.sysxlgns and sys.sysprivs are visible only within a Dedicated Administrator Connection (DAC) query window in SSMS or within SQLCMD. To open a DAC connection you have to be logged in on the SQL Server itself (via RDP in my case), and you preface the server name in the query connection with ADMIN:, so that the server connection looks like ADMIN:ServerName. From there you can create the modified stored procedure, under whatever name you like, in the restored copy of the Master database from the source system, and then run it. I named my new stored procedure usp_help_revlogin_MyMaster. Upon execution I was happy to see the logins and password hashes that I needed to apply from the source Master database, without having to restore over the new Master system database and without needing access to the original server (assuming it was down due to whatever disaster put it in that state).

    You will note that I am not providing full code samples of the modifications here. It was only a slight bit of work, and anyone who needed to do this for whatever reason could fairly easily roll their own solution from the information provided herein. My goal, as I said, was to prove that this could be done and to provide another option, if required, to ease the burden of getting SQL Servers up and available in an emergency situation where alternatives may be more challenging or otherwise unavailable.
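
    The article deliberately withholds the modified procedure, so the following is not the author's code; it is an independent, minimal sketch of the same idea, run from a DAC connection (ADMIN:ServerName), assuming the restored copy is named Master_Mine and that sys.sysxlgns exposes the name, sid and pwdhash columns described above:

        -- Run over a DAC connection; sys.sysxlgns is not visible otherwise.
        -- Sketch only; not the author's usp_help_revlogin_MyMaster.
        SELECT 'CREATE LOGIN ' + QUOTENAME(name)
             + ' WITH PASSWORD = ' + CONVERT(varchar(max), pwdhash, 1) + ' HASHED'
             + ', SID = '          + CONVERT(varchar(max), sid, 1)     + ';'
        FROM Master_Mine.sys.sysxlgns
        WHERE pwdhash IS NOT NULL;   -- SQL logins only; Windows logins carry no hash

    On versions where CONVERT style 1 is not available for binary-to-string conversion, the 0x... literal has to be built by a helper such as sp_hexadecimal, which is exactly the role it plays in the author's approach.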

    Read the article

  • The Fantastic New WebLogic on Oracle Database Appliance 2.9 Release is Here!

    - by JuergenKress
    Last week was a big day in virtualised ODA-land as it saw the launch of WebLogic on ODA 2.9. Admittedly it doesn't sound like a very exciting release, but it is one that we at O-box have been looking forward to for quite some time. Let me explain why, then we'll look into the details... The ODA X4-2 has 48 Intel Xeon cores. That is a lot of compute power. Whilst the largest O-box SOA Appliance single-environment configuration can in theory use all those cores (currently with 40 vCPU of SOA!), the vast majority of O-box users will want smaller configurations. Prior to 2.9 the Oracle WebLogic implementation only supported one domain per ODA, so the conundrum O-box development faced last year was either: offer customers only one SOA environment on their O-box for now (but have the benefit of a standard, easily supportable WebLogic installation), or build our own WebLogic/OTD OVM templates from scratch. One of our driving goals with O-box is to give the best possible experience and make the appliance as supportable as possible. Therefore we took the gamble that we would stick with Oracle's one-domain WebLogic configuration initially, and just hope that it would deliver multi-domain support for us in a timely manner (note: this is probably not a strategy that business textbooks would recommend!). Anyway, we've been working closely with Oracle Product Management for a few months now and I'm delighted to see 2.9 as the fruits of their labour. This also neatly ties in with several recent requests for O-box to include OSB as well as SOA/BPEL (which we have always wanted to have in separate domains). The diagram below is the neatest way to summarise what the new 2.9 release will allow us to deliver, i.e. previously only one 3D box was possible. Read the complete article here. WebLogic Partner Community: For regular information become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: oBox, WebLogic on ODA, ODA, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-21

    - by Bob Rhubart
    Software Architects Need Not Apply | Dustin Marx
    "I think there is a place for software architecture," says Dustin Marx, "but a portion of our fellow software architects have harmed the reputation of the discipline." For another angle on this subject, check out Out of the Tower, Into the Trenches from the Nov/Dec edition of Oracle Magazine.
    Oracle Data Integrator 11g - Faster Files | David Allan
    David Allan illustrates "a big step for regular file processing on the way to super-charging big data files using Hadoop."
    2012 Oracle Fusion Middleware Innovation Awards - Win a FREE Pass to Oracle OpenWorld 2012 in SF
    Share your use of Oracle Fusion Middleware solutions and how they help your organization drive business innovation. You just might win a free pass to Oracle OpenWorld 2012 in San Francisco. Deadline for submissions is July 17, 2012.
    WLST Domain creation using dry-run | Michel Schildmeijer
    What to do "if you want to browse through your domain to check if settings you want to apply satisfy your requirements."
    Cloud opens up new vistas for service orientation at Netflix | Joe McKendrick
    "Many see service oriented architecture as laying the groundwork for cloud. But at one well-known company, cloud has instigated the move to SOA."
    How to avoid the Portlet Skin mismatch | Martin Deh
    Detailed how-to from WebCenter A-Team blogger Martin Deh.
    Internationalize WebCenter Portal - Content Presenter | Stefan Krantz
    Stefan Krantz explains "how to get Content Presenter and its editorials to comply with the current selected locale for the WebCenter Portal session."
    Oracle Public Cloud Architecture | Tyler Jewell
    Tyler Jewell discusses the multi-tenancy model and elasticity solution implemented by Oracle Cloud in this QCon presentation.
    A Distributed Access Control Architecture for Cloud Computing
    The authors of this InfoQ article discuss a distributed architecture based on the principles from security management and software engineering.
    Thought for the Day: "Let us change our traditional attitude to the construction of programs. Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do." — Donald Knuth
    Source: Quotes for Software Engineers

    Read the article

  • How do I write unit tests to make sure a stored procedure is deleting rows from the database?

    - by aspdotnetuser
    I'm new to unit testing and I need some help with the following. I have created a small project to help me learn how to make Unit Tests. The functionality for one of the forms in my application deletes a user from the User table (and other rows in mapping tables). Currently, the unit test I have created to test this sets up the required objects and then calls the business rules method (passing in the user id) which calls the data access method to execute the stored procedure that deletes the rows in the tables. Is this the correct method to test whether something is being deleted successfully? Should the unit test / setup method first insert some test data which the unit test then deletes?
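
    One common pattern, sketched below, is an integration-style test that arranges its own data, acts, asserts, and then rolls everything back so the database is left untouched. The procedure and table names here are hypothetical, not the asker's:

        BEGIN TRANSACTION;

        -- Arrange: insert a known test user (plus any mapping rows the proc should remove).
        INSERT INTO dbo.[User] (UserId, UserName) VALUES (999999, 'unit-test-user');

        -- Act: call the stored procedure under test.
        EXEC dbo.DeleteUser @UserId = 999999;

        -- Assert: the row must be gone.
        IF EXISTS (SELECT 1 FROM dbo.[User] WHERE UserId = 999999)
            RAISERROR ('DeleteUser did not remove the test row.', 16, 1);

        -- Clean up: leave the database exactly as it was found.
        ROLLBACK TRANSACTION;

    Whether you call that a unit test or an integration test, having the setup insert its own fixture row (rather than relying on pre-existing data) is what keeps the test repeatable.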

    Read the article

  • AD-DirectoryServices: .NET2.0 - Speaking architecture, approach and best practices... Suggestions?

    - by Will Marcouiller
    I've been mandated to write an application to migrate Active Directory access models to another environment. Here's the context:
    - I'm stuck with VB.NET 2005 and .NET Framework 2.0;
    - The application must use the Windows-authenticated user to manage AD;
    - The objects I have to handle are Groups, Users and OrganizationalUnits;
    - I intend to use the Façade design pattern to provide ease of use and fully reusable code;
    - I plan to write a factory for each of the objects managed (group, OU, user);
    - The use of Attributes should be useful here, I guess;
    - As everything revolves around the DirectoryEntry class when accessing AD, it seems a good candidate for generic types.
    Obligatory features:
    - User creates new OUs manually;
    - User creates new groups manually;
    - User creates new users (these users are service accounts) manually;
    - Application reads an XML file which contains the OUs, groups and users to create;
    - Application informs the user about the OUs, groups and users that shall be created;
    - User specifies the domain environment to which the objects designated by the XML input file will be migrated;
    - User makes changes if needed, and launches the task operations;
    - Application performs the operations required by the XML input file against the underlying AD, as specified by the user;
    - Application informs the user upon completion.
    Linear features:
    - User fetches OUs, groups, users;
    - User changes OUs, groups, users;
    - User deletes OUs, groups, users;
    - The application logs AD entries and operations performed, plus errors and exceptions.
    Nice-to-have features:
    - Application rolls back operations on error or exception.
    I've been working for weeks now to get acquainted with AD and the System.DirectoryServices assembly, but I don't seem to find a way to be fully satisfied with what I'm doing and am always looking for something better. I have studied Bart De Smet's LINQ to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no LINQ! Still, I've learned about Attributes, and seen that he works with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users. I have been able to add groups to the AD, and I have been able to add users to the AD (though the created user comes out automatically disabled; why?). I also get confused by the use of an LDAP path to add objects: for instance, one needs two instances of the System.DirectoryServices.DirectoryEntry class just to add a group. Why is this? Any suggestions? Thanks for any help: code samples, ideas, architectural solutions, everything!

    Read the article

  • Keeping a database structure up to date in a project where code is on subversion?

    - by Bruno De Barros
    I have been working with Subversion for a while now, and it's been invaluable for managing my projects and even for helping to deploy to several different servers, but there is one thing that still annoys me. Whenever I make changes to the database structure, I need to update every server manually and keep track of every change I made; and because some of my servers run branches of the project (modifications that are still being worked on, or that were made for different purposes), it's a bit awkward. Until now I've been using a "database.sql" file, which is a dump of the database structure for a specific revision, but that seems like a bad way to manage this. So I was wondering: how does everyone else manage their MySQL databases when working on a project under Subversion?
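
    The approach most people converge on is to stop versioning a single full dump and instead commit small, numbered, incremental migration scripts alongside the code, with a bookkeeping table in each database recording which scripts have already been applied. A sketch of what one such script might look like (the file name, table names and version number are illustrative, and it assumes a schema_version table created once per database):

        -- migrations/015_add_customer_loyalty.sql  (checked into Subversion with the code)
        ALTER TABLE customers ADD COLUMN loyalty_tier VARCHAR(20) NULL;

        -- Record that this migration has run, so every server (trunk or branch)
        -- only ever applies the scripts it is missing.
        INSERT INTO schema_version (version, applied_at) VALUES (15, NOW());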

    Read the article

  • How can I split a PHP form into two sections, before submission to a database?

    - by Daniel Smith
    Hi guys, I have set up a PHP form for a competition, so users can enter and have all their information stored in a database. I used the following NetTuts+ tutorial to do so: http://tr.im/SwAd. I've got the form submitting to the database as required, but with so many additional questions being asked, I would like to split the form into two separate steps. The first page would have a "continue to the next step" button, and only the second step would actually submit the data to the database. The content the user sees should be split into Step 1 and Step 2, but both steps should effectively be part of the same form, with submission happening only after Step 2. Would anyone know of, or recommend, any methods to do this? I'm a beginner so please be nice. :) Cheers, Daniel

    Read the article

  • How do we know if a query is cached or retrieved from the database?

    - by Hadi
    For example:

        class Product
          has_many :sales_orders

          def total_items_deliverable
            self.sales_orders.each { |so|
              # sum the total
            }
            # give back the value
          end
        end

        class SalesOrder
          def self.deliverable
            # return array of sales_orders that are deliverable to customer
          end
        end

        SalesOrder.deliverable        # (1) give all sales_orders that are deliverable to customer
        pa = Product.find(1)          # (2)
        pa.sales_orders.deliverable   # (3) give all sales_orders whose product_id is 1 and deliverable to customer
        pa.total_so_deliverable       # (4)

    The point I am really asking about is: how many times is SalesOrder.deliverable actually computed across points 1, 3 and 4? It is computed 3 times, which means 3 accesses to the database; so having total_so_deliverable promotes a fat model, but at the cost of more database access. Alternatively (in the view) I could iterate while displaying the content, so I end up accessing the database only 2 times instead of 3. Is there any win-win solution / best practice for this kind of problem?

    Read the article

  • How can I create an executable to run on a certain processor architecture (instead of certain OS)?

    - by CrazyJugglerDrummer
    So I take my C++ program in Visual Studio, compile it, and it spits out a nice little EXE file. But EXEs only run on Windows, and I hear a lot about how C/C++ compiles into assembly language, which runs directly on a processor. The EXE runs with the help of Windows, or I could have a program that makes an executable that runs on a Mac. But aren't I compiling C++ code into assembly language, which is processor-specific? My insights: I'm guessing I'm probably not. I know there's an Intel C++ compiler, so would that produce processor-specific assembly code? EXEs run on Windows, so they take advantage of tons of things already set up, from graphics packages to the massive .NET framework. A processor-specific executable would be starting literally from scratch, with just the instruction set of the processor. Would this executable be a file type of its own? We could be running Windows and open it, but would control then switch to the processor only? I assume this executable would be something like an operating system, in that it would have to be run before anything else was booted up, and would have only the processor's instruction set to "use".

    Read the article

  • Preview the result of an update/insert query without committing changes to the database in MySQL?

    - by Camsoft
    I am writing a script to import CSV files into existing tables within my database. I decided to do the insert/update operations myself using PHP and INSERT/UPDATE statements, and not to use MySQL's LOAD DATA INFILE command; I have good reasons for this. What I would like to do is emulate the insert/update operations and display the results to the user, then give them the option of confirming that the result is OK before committing the changes to the database. I'm using the InnoDB storage engine, with support for transactions. Not sure if this helps, but I was thinking along the lines of: insert/update, query the data, display it to the user, then either commit or roll back the transaction? Any advice would be appreciated.
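
    Since the tables are InnoDB, the flow described above maps directly onto an explicit transaction: run the INSERT/UPDATE statements, SELECT the affected rows back to show the user, and only then COMMIT or ROLLBACK. A minimal sketch (the products table is hypothetical; from PHP each statement would be issued over the same connection):

        START TRANSACTION;

        -- Apply a row parsed from the CSV.
        INSERT INTO products (sku, price) VALUES ('A-100', 9.99)
            ON DUPLICATE KEY UPDATE price = VALUES(price);

        -- Preview: read the pending state back on the SAME connection and show it
        -- to the user; other connections will not see these rows yet.
        SELECT sku, price FROM products WHERE sku = 'A-100';

        -- Then, depending on the user's choice:
        COMMIT;       -- make the changes permanent
        -- ROLLBACK;  -- or discard them entirely

    The one caveat is that the transaction (and its row locks) stays open while the user decides, so the confirmation step is best kept short.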

    Read the article

  • how to get the size of a C global array into an assembly program written for the avr architecture co

    - by johannes
    I have a .c file with the following: uint8_t buffer[32]. I have a .S file where I want to do the following: cpi r29, buffer+sizeof(buffer). The second argument of cpi must be an immediate value, not a location; but unfortunately sizeof() is a C operator. Both files are compiled to separate object files and linked afterwards. If I run avr-objdump -x on the compiled object, I get, amongst other things, the size of the buffer, so the size is already available in the object file. How do I access the size of the buffer in my assembly file at compile time?

    Read the article

  • Is this technique for stat tracking without a database workable?

    - by baptzmoffire
    If I wanted to create a chess game for iOS that tracked both players' moves (for retracing the progression of a game and for player stats), what would be the simplest route to take? To clarify, I want to track not only the moves a player has made in a particular game, but how often that player has made that move in past games. For example, I want to be able to track: how many times a given player has opened by moving the king pawn up two squares (e4) as white on move number one; what percentage of the time the player responds to white's e4 opening move by moving his own king pawn to e5; what percentage of the time he responds by moving his queenside bishop pawn to c5; and so on. If it's not clear, the stat-tracking system should also be able to report how many times this player, as black, moved his queen to h1 on move number 30. I'm using Parse.com as my back-end (BaaS) service. Suppose I were to create a class that: writes strings identifying the move number, player color, moved piece, and the algebraic notation of the square (e.g. "d8") to a file locally in the file system; saves the file to Parse and deletes the temporary local file; upon opening the same game in my table view (a la a "With Friends" game), downloads the file from Parse, parses through it, and retrieves all stats/history; and assigns all relevant values to variables. Is this plan viable, or is there an easier way?
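
    Since every stat above is an aggregation over (game, move number, colour, move) tuples, a small local SQLite table would answer all of these questions with plain GROUP BY queries, whether or not the move files are also synced to Parse. This is only an illustrative sketch; the schema and names are mine, not Parse's or the asker's:

        -- One row per half-move, written as the game is played.
        CREATE TABLE moves (
            game_id  INTEGER NOT NULL,
            move_no  INTEGER NOT NULL,   -- full-move number: 1, 2, 3, ...
            color    TEXT    NOT NULL,   -- 'w' or 'b'
            san      TEXT    NOT NULL    -- algebraic notation, e.g. 'e4', 'Qh1'
        );

        -- How often does this player, as black, answer 1.e4 with e5, c5, etc.?
        -- (Each count divided by the total of all counts gives the percentage.)
        SELECT b.san, COUNT(*) AS games
        FROM moves w
        JOIN moves b ON b.game_id = w.game_id AND b.move_no = 1 AND b.color = 'b'
        WHERE w.move_no = 1 AND w.color = 'w' AND w.san = 'e4'
        GROUP BY b.san;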

    Read the article

  • How to leverage Spring Integration in a real-world JMS distributed architecture?

    - by ngeek
    For the following scenario I am looking for your advice and tips on best practices. In a distributed (mainly Java-based) system with: many (different) client applications (web app, command-line tools, REST API); a central JMS message broker (currently in favor of using ActiveMQ); and multiple stand-alone processing nodes (running on multiple remote machines, computing expensive operations of different types as specified by the JMS message payload) – how would one best apply the JMS support provided by the Spring Integration framework to decouple the clients from the worker nodes? Reading through the reference documentation and from some very first experiments, it looks as if the configuration of a JMS inbound adapter inherently requires a subscriber, which in a decoupled scenario does not exist. Small side note: communication should happen via JMS text messages (using a JSON data structure for future extensibility).

    Read the article

  • Is a .Net membership database portable, or are accounts somehow bound to the originating Web site or

    - by Deane
    I have an ASP.NET Web site using .NET Membership with a SQL Server provider, so the users and roles are stored in the SQL tables created by Aspnet_regsql.exe. Is this architecture totally self-contained and portable, or are the users in it somehow bound to the specific Web site on which they created their accounts? Put another way: if we create a bunch of users in dev or UAT, then back up and restore this database to another server, accessed under another domain name, should it still work just fine? We're seeing some odd behavior when we move the database, like users losing group affiliation, and I'm curious how portable and environment-agnostic this database really is. I have a sneaking suspicion that something is bound to the machine key or the domain.
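
    One detail worth checking before suspecting the machine key: the tables created by Aspnet_regsql.exe are self-contained, but every user, role and membership row hangs off an ApplicationId in aspnet_Applications, and the provider resolves that through the applicationName attribute in web.config rather than the domain or machine. A mismatch there after a restore is a classic cause of users apparently losing their roles. A quick diagnostic query (standard aspnet_regsql table names; the join itself is just illustrative):

        -- Which application name(s) do the restored users and roles belong to?
        SELECT a.ApplicationName, u.UserName, r.RoleName
        FROM aspnet_Applications a
        JOIN aspnet_Users u ON u.ApplicationId = a.ApplicationId
        LEFT JOIN aspnet_UsersInRoles ur ON ur.UserId = u.UserId
        LEFT JOIN aspnet_Roles r ON r.RoleId = ur.RoleId
        ORDER BY a.ApplicationName, u.UserName;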

    Read the article

  • How to make a remote connection to a MySQL Database Server?

    - by MLB
    Hello: I am trying to connect to a MySQL database server (remote), but I can't. I am using a user with GRANT privileges (not the root user). The error message is the following: Can't obtain database list from the server. Access denied for user 'myuser'@'mypcname' (using password: YES). "myuser" is a user I created with grant access; this user allows me to connect locally to every database. I am using the same software versions on both hosts: MySQL Server 4.1 (server) and EMS SQL Manager 2005 for MySQL, edition 3.7.0.1 (client). The point is that I need to connect to the remote server using a different user, not the root user. So, how do I make the connection? Thanks.
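
    The error quotes the account as 'myuser'@'mypcname', which is the important clue: MySQL privileges are granted per user-and-host pair, so an account that works locally ('myuser'@'localhost') does not automatically exist for connections arriving from another machine. A sketch of the kind of grant that is usually missing (run on the server as a privileged user; the database name and password are placeholders):

        -- Allow myuser to connect from any host; use a specific hostname or IP
        -- instead of '%' to keep it tighter.
        GRANT ALL PRIVILEGES ON mydatabase.* TO 'myuser'@'%' IDENTIFIED BY 'the_password';
        FLUSH PRIVILEGES;

    If the grant is in place and the remote connection still fails, check that the MySQL 4.1 server is not started with skip-networking and is not bound only to 127.0.0.1.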

    Read the article

  • How should I store a Game Database on Android?

    - by Liam
    I'm looking at creating a game for Android, and while I have most of the ins and outs worked out, the one thing I'm struggling with is how to store the game's data. Ultimately the game will be based on a lot of pre-defined data and statistics, so the obvious choice to me would be something like SQLite, but as I'm pretty new to the realm of Android and game development, I'm not 100% certain this is the right route to follow. The data will be general pre-defined data as well as player data (along the lines of career stats - what place they finished, etc.). I was wondering if there is a better or best-practice solution that isn't SQLite and that would provide this functionality, and if so, could you point me in the right direction?
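
    SQLite is indeed the usual answer on Android for exactly this mix of shipped reference data plus accumulating player stats. A rough, purely illustrative sketch of the two kinds of tables (the event/place names are my guess at the "what place finished" stats, not anything prescribed):

        -- Pre-defined reference data, shipped with the game (read-only at runtime).
        CREATE TABLE event (
            event_id   INTEGER PRIMARY KEY,
            name       TEXT NOT NULL,
            difficulty INTEGER NOT NULL
        );

        -- Career stats, written as the player finishes events.
        CREATE TABLE career_result (
            result_id INTEGER PRIMARY KEY AUTOINCREMENT,
            event_id  INTEGER NOT NULL REFERENCES event(event_id),
            place     INTEGER NOT NULL,                        -- finishing position
            played_at TEXT NOT NULL DEFAULT (datetime('now'))
        );

        -- Example career query: average finishing position per event.
        SELECT e.name, AVG(c.place) AS avg_place
        FROM career_result c JOIN event e ON e.event_id = c.event_id
        GROUP BY e.name;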

    Read the article

  • SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

    - by Pinal Dave
    [Notes from Pinal]: The biggest challenge people face is not taking a backup; the biggest challenge is restoring a backup successfully. I have seen so many examples where users failed to restore their database because they made a mistake while taking the backup and were not aware of it. The tail-log backup was such an issue in earlier versions of SQL Server, but in the latest version the Microsoft team has addressed the confusion with additional information on the backup and restore screens themselves. Now that this extra information is there, a few more people are confused because they have no clue what it means: previously they did not see it as an issue, and now they are encountering the tail log as new learning. Linchpin People are database coaches and wellness experts for a data driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains, in very simple words, backing up and recovering the tail end of a transaction log.

    Many times, when restoring a database over an existing database, SQL Server will warn you about needing to take a tail-of-the-log backup. This might be your reminder that you have to choose to overwrite the database, or it could be your reminder that you are about to write over and lose any transactions made since the last transaction log backup. You might be asking yourself, "What is the tail end of the transaction log?" The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture it.

    Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement; however, if your organization is taking transaction log backups every 15 minutes, then your potential data loss is up to 15 minutes. Depending on the extent of the issue causing you to perform a restore, you may or may not have access to the transaction log (LDF) to be able to back up those vital transactions. For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged, then you won't be able to recover the tail end of the log. If you do have access to the physical log file, then you can still back up the tail end of the log. In 2013 I presented a session at the PASS Summit called "The Ultimate Tail Log Backup and Restore" and have been invited back this year to present it again.

    During that session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF); the database can't become more corrupt than that. I attempt to bring the database back online to change its state to RECOVERY PENDING, and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR; as its name says, it does not try to truncate the log. This is a great demo, but how could I back up the tail end of the log if the failure destroyed my entire SQL instance and all I had was the LDF file? During my demonstration I also show that I can attach the log file to a database on another instance and then back up the tail end of the log.
    If I am performing proper backups, then my most recent full, differential and log backup files should be on a server other than the one that crashed. I achieve this task by creating a new database with the same name as the failed database, setting it offline, deleting its data file, and overwriting its log with my good log file. I then attempt to bring the database back online and back up the log with NO_TRUNCATE, just as in the first example. I encourage each of you to view my blog post and watch the video demonstration of how to perform these tasks. I really hope that none of you ever has to do this in production, but it is a really good idea to know how, just in case. It isn't a matter of "IF" you will have to perform a restore of a production system, but "WHEN". Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss and not. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL Backup and Recovery, a must-have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
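
    For reference, the two commands at the heart of the technique Tim describes look like this (database and file names are placeholders): the first takes the tail-log backup against a damaged database, and the second applies that tail at the very end of a restore sequence.

        -- Back up the tail of the log even though the data file is gone;
        -- NO_TRUNCATE behaves like COPY_ONLY plus CONTINUE_AFTER_ERROR.
        BACKUP LOG [MyDatabase]
            TO DISK = N'D:\Backups\MyDatabase_tail.trn'
            WITH NO_TRUNCATE, INIT;

        -- Later, on the recovery server: restore the full, differential and log backups
        -- WITH NORECOVERY, then finish with the tail to reach the point of failure.
        RESTORE LOG [MyDatabase]
            FROM DISK = N'D:\Backups\MyDatabase_tail.trn'
            WITH RECOVERY;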

    Read the article

  • How to programmatically set a data cell of a database in VB.NET?

    - by manuel
    I have a Microsoft Access database file connected to VB.NET. In the database table I have a 'No.' column and a 'Status' column. I have a textbox where I can input an integer, and a button which, when clicked, should change the 'Status' cell of the row where 'No.' equals textbox.Text(). Let's say I want the 'Status' cell to be changed to "Low". What code should I put in the button_Click event handler? Here is the code I use to retrieve and display the database table:

        Public Class theForm
            ' Connection, dataset and adapter shared by the form.
            Dim con As New OleDb.OleDbConnection
            Dim ds As New DataSet
            Dim daTitles As OleDb.OleDbDataAdapter

            Private Sub theForm_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                con.ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0; Data Source=|DataDirectory|\db1.mdb"
                con.Open()

                ' Fill the DataSet from the Titles table and bind it to the grid.
                daTitles = New OleDb.OleDbDataAdapter("SELECT * FROM Titles", con)
                daTitles.Fill(ds, "Titles")
                DataGridView1.DataSource = ds.Tables("Titles")
                DataGridView1.AutoResizeColumns()
            End Sub
        End Class
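
    Without writing the handler for the asker, the core of the answer is a parameterized UPDATE run through the same OleDb connection. Something along these lines (the column names are guesses based on the question; Access field names cannot actually contain a period, so the real column is probably just No):

        -- Executed via an OleDbCommand; the ? placeholders are filled, in order,
        -- with "Low" and the integer typed into the textbox.
        UPDATE Titles SET [Status] = ? WHERE [No] = ?

    In the button_Click handler that means creating an OleDbCommand with this text on con, adding the two parameters in that order, calling ExecuteNonQuery, and then re-filling the DataSet so the grid reflects the change.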

    Read the article

  • How to model my database when using entity framework 4?

    - by Junior Ewing
    I'm trying to wrap my head around the best approach to modelling a database when using Entity Framework 4 as the ORM layer; we are going to use ASP.NET MVC 2 for the application. Is it worth trying to model using the class-diagram modeller that comes with Visual Studio 2010, where you graphically configure your models into the EDMX file and then generate the database structure from it? I have run into a bunch of non-trivial issues, and for complex many-to-many mappings or entities with composite primary keys the answer is not obvious even after poking around with the tools for a while. I figure it would be easy at this point to give up and start modelling the DB using real, working DB modelling tools, and then try to generate the EDMX from the database, rather than trying the model-first approach.

    Read the article
