Search Results

Search found 36863 results on 1475 pages for 'database structure'.


  • 12/12 Live Webcast: Introducing Next-Generation Enterprise Auditing and Database Firewall

    - by jgelhaus
    Join Oracle Security gurus to hear how Oracle products monitor Oracle and non-Oracle database traffic, detect unauthorized activity including SQL injection attacks, and block internal and external threats from reaching the database. Hear how organizations such as TransUnion Interactive and SquareTwo Financial rely on Oracle to monitor and secure their Oracle and non-Oracle database environments. Register for the webcast here.

    Read the article

  • Monitoring Database disk space

    - by Michael Freidgeim
    The article Data files: To Autogrow Or Not To Autogrow? recommends NOT relying on auto-grow, because it causes delays at unplanned times. We should monitor database files (both data and log), and if they are close to maximum capacity, increase the size manually. However, the article doesn't give references on how to monitor the free space inside databases, so I tried to find out how to do it. It can be done manually by executing sp_spaceused for the database in question, or with sp_SOS (which can be downloaded from http://searchsqlserver.techtarget.com/tip/Find-size-of-SQL-Server-tables-and-other-objects-with-stored-procedure). Alternatively, you can run a SQL command as suggested by Michael Valentine Jones in http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=82359: select [FREE_SPACE_MB] = convert(decimal(12,2),round((a.size-fileproperty(a.name,'SpaceUsed'))/128.000,2)) from dbo.sysfiles a. A more useful article, Monitor database file sizes with SQL Server Jobs, describes how to set up monitoring. Finally I found the excellent article Managing Database Data Usage With Custom Space Alerts, which can be followed even by support personnel without much DBA experience.

    Read the article

  • jMonkey Quest Database

    - by theJollySin
    I am building a game in jMonkey (Java) and I have so far only used default quest text. But now I need to start populating a lot of quests with text. My design requires A LOT of quest texts. What is the best way to build a database of quest texts in jMonkey? I don't have a lot of real experience with databases. Is there a database that integrates well with jMonkey? Here are the ideal properties I want in my database, in order of priority: a reasonably light learning curve; easy portability (in Java) to Windows, Linux, and Mac OS X; a good interface with Java; a good interface with jMonkey; and the ability to add properties to the quests: ID, level, gender, quest chain ID, etc. Or am I wrong in thinking I need to use some giant monster like SQL? I haven't been able to find much information on this, so are people using some non-database methods for storing things like quest text in jMonkey?
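
    One lightweight, database-free option is to keep quest texts in a plain data file and load them into a map at startup, using only the Java standard library. A minimal sketch under assumptions: the quest data lives in a tab-separated quests.tsv shipped with the game, and the QuestStore/Quest names and fields are hypothetical, not part of jMonkey.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class QuestStore {
            // Minimal quest record; extra properties (gender, chain id) become extra columns.
            public static class Quest {
                public final int id;
                public final int level;
                public final String text;
                public Quest(int id, int level, String text) {
                    this.id = id; this.level = level; this.text = text;
                }
            }

            // Loads one quest per line in the form: id<TAB>level<TAB>text
            public static Map<Integer, Quest> load(String path) throws IOException {
                Map<Integer, Quest> quests = new HashMap<>();
                List<String> lines = Files.readAllLines(Paths.get(path));
                for (String line : lines) {
                    if (line.isEmpty() || line.startsWith("#")) continue; // skip blanks and comments
                    String[] cols = line.split("\t", 3);
                    if (cols.length < 3) continue;                         // ignore malformed lines
                    Quest q = new Quest(Integer.parseInt(cols[0]), Integer.parseInt(cols[1]), cols[2]);
                    quests.put(q.id, q);
                }
                return quests;
            }
        }

    An embedded database (SQLite, H2 and the like) only becomes worth the extra dependency once you need ad-hoc queries over many quest properties; for plain lookup by ID, a flat file plus a HashMap is portable across Windows, Linux, and OS X and has essentially no learning curve.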

    Read the article

  • SQL SERVER – Core Concepts – Elasticity, Scalability and ACID Properties – Exploring NuoDB an Elastically Scalable Database System

    - by pinaldave
    I have been recently exploring the Elasticity and Scalability attributes of databases. You can see that in my earlier blog posts about NuoDB, where I wanted to look at the Elasticity and Scalability concepts. The concepts are very interesting, and intriguing as well. I have discussed these concepts with my friend Joyti M and together we have come up with this interesting read. The goal of this article is to answer the following simple questions: What is Elasticity? What is Scalability? How do ACID properties vary from NOSQL concepts? What are the prevailing problems in the current database system architectures? Why is NuoDB an innovative and welcome change in the database paradigm?

    Elasticity
    This word's original form is used in many different ways, and honestly it does do a decent job in holding things together over the years as a person grows and contracts. Within the tech world, and specifically related to software systems (databases, application servers), it has come to mean a few things – allow stretching of resources without reaching the breaking point (on demand). What are resources in this context? Resources are the usual suspects – RAM/CPU/IO/Bandwidth in the form of a container (a process or a bunch of processes combined as modules). When it is about increasing resources, the simplest idea which comes to mind is the addition of another container. Another container means adding a brand new physical node. When it is about adding a new node, there are two questions which come to mind: 1) Can we add another node to our software system? 2) If yes, does adding a new node cause downtime for the system? Let us assume we have added a new node, and see what the new needs of the system are when a new node is added: balancing incoming requests to multiple nodes; synchronization of a shared state across multiple nodes; identification of "downstate" and resolution action to bring it to "upstate". Well, adding a new node has its advantages as well. Here are a few of the positive points: throughput can increase nearly horizontally across the nodes throughout the system, and response times of the application will improve as in-between layer interactions improve.

    Now, let us put the above concepts in the perspective of a database. When we mention the term "running out of resources" or "application is bound to resources", the resources can be CPU, memory or bandwidth. The regular approach to "gain scalability" in the database is to look around for bottlenecks and increase the bottlenecked resource. When we have memory as a bottleneck, we look at the data buffers, locks, query plans or indexes. After a point even this is not enough, as there needs to be an efficient way of managing such a large workload on a "single machine" across memory- and CPU-bound workloads (the right kind of scheduling). We next move on to either read/write separation of the workload or functionality-based sharing, so that we still have control of the individual parts. But this requires lots of planning and change in client systems in terms of knowing where to go/update/read, and for reporting applications to "aggregate the data" in an intelligent way. What we ideally need is an intelligent layer which allows us to do these things without us getting into managing, monitoring and distributing the workload.

    Scalability
    In the context of databases/applications, scalability means three main things: the ability to handle normal loads without pressure, e.g. X users at the Y utilization of resources (CPU, memory, bandwidth) on the Z kind of hardware (a 4-processor, 32 GB machine with 15000 RPM SATA drives and a 1 GHz network switch) with T throughput; the ability to scale up to an expected peak load which is greater than the normal load, with acceptable response times; and the ability to provide acceptable response times across the system, e.g. a response time of S milliseconds (or an agreed-upon unit of measure) 90% of the time.

    The Issue – Need of Scale
    In normal cases one can plan for load testing to test out normal, peak, and stress scenarios to ensure specific hardware meets the needs. With help from hardware and software partners and best practices, bottlenecks can be identified and requisite resources added to the system. Unfortunately this vertical scale is expensive and difficult to achieve, and most operational people need the ability to scale horizontally. This helps in getting better throughput, as there are physical limits in terms of adding resources (memory, CPU, bandwidth and storage) indefinitely. Today we have different options to achieve scalability:

    Read & Write Separation
    The idea here is to do actual writes to one store and configure slaves receiving the latest data with acceptable delays. Slaves can be used for balancing out reads (a small routing sketch appears at the end of this entry). We can also explore functional separation or sharing as well. We can separate data operations by a specific identifier (e.g. region, year, month) and consolidate it for reporting purposes. For functional separation the major disadvantage is when the schema or the workload pattern changes. As the requirement grows, one still needs to deal with the need to scale in manual ways by providing an abstraction in the middle-tier code.

    Using NOSQL solutions
    The idea is to flatten out the structures in general, to keep all values which are retrieved together at the same store, and to provide a flexible schema. The issue with these stores is that they mostly compromise on consistency (no ACID guarantees) and one has to use a non-SQL dialect to work with the store. The other major issue is about education with NOSQL solutions. Would one really want to make these compromises on the ability to connect and retrieve in a simple SQL manner and learn other skill sets? Or, for that matter, give up on the ACID guarantee and start dealing with consistency issues?

    Hybrid Deployment – Mac, Linux, Cloud, and Windows
    One of the challenges today that we see across on-premise vs cloud infrastructure is a difference in abilities. Take for example SQL Azure – it is wonderful in its concepts of throttling of resources (as it is a shared deployment) and its ability to scale using federation. However, the same abilities are not available on premise. This is not a mistake, mind you – but a compromise of the sweet spot of workloads, customer requirements and operational SLAs which can be supported by the team. In today's world it is imperative that databases are available across operating systems – which are a commodity and used by developers of all hues.

    An Ideal Database Ability List
    A system which allows a linear scale of the system (increase in throughput with reasonable response time) with the addition of resources. A system which does not compromise on the ACID guarantees and does not require developers to learn new paradigms. A system which does not force fit a new way of interacting with the database by requiring a non-SQL dialect. A system which does not force fit its mechanisms for providing availability across its various modules.
Well NuoDB is the first database which has all of the above abilities and much more. In future articles I will cover my hands-on experience with it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB
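
    Returning to the Read & Write Separation option above: here is a minimal, hypothetical sketch of what the application-side routing looks like when the application has to know "where to go/update/read". It is not NuoDB-specific and the class and method names are illustrative only; writes go to a primary javax.sql.DataSource and reads are spread round-robin over replica DataSources.

        import java.sql.Connection;
        import java.sql.SQLException;
        import java.util.List;
        import java.util.concurrent.atomic.AtomicInteger;
        import javax.sql.DataSource;

        public class ReadWriteRouter {
            private final DataSource primary;          // receives all writes
            private final List<DataSource> replicas;   // refreshed from the primary with acceptable delay
            private final AtomicInteger next = new AtomicInteger();

            public ReadWriteRouter(DataSource primary, List<DataSource> replicas) {
                this.primary = primary;
                this.replicas = replicas;
            }

            // All INSERT/UPDATE/DELETE traffic goes to the primary node.
            public Connection writeConnection() throws SQLException {
                return primary.getConnection();
            }

            // SELECT traffic is balanced across the replicas, round-robin.
            public Connection readConnection() throws SQLException {
                int i = Math.floorMod(next.getAndIncrement(), replicas.size());
                return replicas.get(i).getConnection();
            }
        }

    This is exactly the burden the article describes: the routing logic, the replication lag, and the reporting-side aggregation all become the application's problem, which is what an elastic database layer aims to take away.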

    Read the article

  • Announcing: Oracle Database 11g R2 Certification on Oracle Linux 6

    - by Monica Kumar
    Oracle Announces the Certification of the Oracle Database on Oracle Linux 6 and Red Hat Enterprise Linux 6 Yesterday we announced the certification of Oracle Database 11g R2 with Oracle Linux 6 and Unbreakable Enterprise Kernel. Here are the key highlights: Oracle Database 11g Release 2 (R2) and Oracle Fusion Middleware 11g Release 1 (R1) are immediately available on Oracle Linux 6 with the Unbreakable Enterprise Kernel. Oracle Database 11g R2 and Oracle Fusion Middleware 11g R1 will be available on Red Hat Enterprise Linux 6 (RHEL6) and Oracle Linux 6 with the Red Hat Compatible Kernel in 90 days. Oracle offers direct Linux support to customers running RHEL6, Oracle Linux 6, or a combination of both. Oracle Linux will continue to maintain compatibility with Red Hat Linux. Read the full press release. 

    Read the article

  • Graph Isomorphism > What kind of Graph is this?

    - by oodavid
    Essentially, this is a variation of Comparing Two Tree Structures; however, I do not have "trees", but rather another type of graph. I need to know what kind of graph I have in order to figure out whether there's a graph isomorphism special case... As you can see, the graphs are: not directed, not trees, cyclic, and limited to a maximum of 4 connections per node. But I still don't know the correct terminology, nor which isomorphism algorithm to pursue; guidance appreciated.
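
    Whatever algorithm ends up being chosen, a cheap necessary-condition check is often worth running first: if two undirected graphs have different sorted degree sequences they cannot be isomorphic (the converse does not hold). A small sketch, assuming the graphs are given as adjacency lists; the class name is hypothetical.

        import java.util.Arrays;
        import java.util.List;

        public class DegreeSequenceCheck {
            // Graph as adjacency lists: adj.get(v) holds the neighbours of vertex v.
            // Returns false only when the graphs are definitely NOT isomorphic.
            public static boolean mightBeIsomorphic(List<List<Integer>> a, List<List<Integer>> b) {
                if (a.size() != b.size()) return false;
                return Arrays.equals(degrees(a), degrees(b));
            }

            private static int[] degrees(List<List<Integer>> adj) {
                int[] d = new int[adj.size()];
                for (int v = 0; v < adj.size(); v++) d[v] = adj.get(v).size();
                Arrays.sort(d);   // sort so the comparison ignores vertex labelling
                return d;
            }
        }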

    Read the article

  • The Database as Intellectual Property

    - by Jonathan Kehayias
    Every so often, a question shows up on the forums in the form of, “How do I prevent anyone from accessing my database schema, including local administrators and sysadmins in SQL Server?” I usually laugh a little and shake my head when I read a question like this, because it demonstrates a complete lack of understanding of the power an administrator has over SQL Server. The simple answer is this: If you don’t want your database schema to ever be accessed or known, don’t distribute your database....(read more)

    Read the article

  • Using Live Data in Database Development Work

    - by Phil Factor
    Guest Editorial for Simple-Talk Newsletter... in which Phil Factor reacts with some exasperation when coming across a report that a majority of companies were still using financial and personal data for both developing and testing database applications. If you routinely test your development work using real production data that contains personal or financial information, you are probably being irresponsible, and at worst, risking a heavy financial penalty for your company. Surprisingly, over 80% of financial companies still do this. Plenty of data breaches and fraud have happened from the use of real data for testing, and a data breach is a nightmare for any organisation that suffers one. The cost of each data breach averages out at around $7.2 million in the US (£1.9 million in the UK), in notification, escalation, credit monitoring, fines, litigation, legal costs, and lost business due to customer churn. 70% of data breaches are carried out from within the organisation. Real data can be exploited in a number of ways for malicious or criminal purposes. It isn't just the obvious use of items such as name and address, date of birth, social security number, and credit card and bank account numbers: data can be exploited in many subtle ways, so there are excellent reasons to ensure that a high priority is given to the detection and prevention of any data breaches. You'll never successfully guess all the ways that real data can be exploited maliciously, or the ease with which it can be accessed. It would be silly to argue that developers never need access to a copy of the database containing live data. Developers sometimes need to track a bug that can only be replicated on the data from the live database. However, it has to be done in a very restrictive harness. The law makes no distinction between development and production databases when a data breach occurs, so the data has to be held with all appropriate security measures in place. In Europe, the use of personal data for testing requires the explicit consent of the people whose data is being held. There are federal standards such as GLBA, PCI DSS and HIPAA, and most US states have privacy legislation. The task of ensuring compliance and tight security in such circumstances is an expensive and time-consuming overhead. The developer is likely to face investigation if a data breach occurs, even if the company manages to stay in business. Ironically, the use of copies of live data isn't usually the most effective way to develop or test your data. Data is usually time-specific and isn't usually current by the time it is used for testing. Existing data doesn't help much for new functionality, and every time the data is refreshed from production, any test data is likely to be overwritten. Also, it is not always going to test all the 'edge' conditions that are likely to flush out bugs. You still have the task of simulating the dynamics of actual usage of the database, and here you have no alternative to creating 'spoofed' data. Because of the complexities of relational data, it used to be that there was no realistic alternative to developing and testing with live data. However, this is no longer the case. Real data can be obfuscated, or it can be created entirely from scratch. The latter process used to be impractical, but there are now plenty of third-party tools to choose from. The process of obfuscation isn't risk free. The process must access the live data, and the success of the obfuscation process has to be carefully monitored.
    Database data security isn't an exciting topic to you or me, but to a hacker it can be an all-consuming obsession, especially if there is financial or political gain involved. This is not the sort of adversary one would wish for, and it is far better to accept, and work with, the security restrictions that exist for using live data in database development work, especially when the tools exist to create large, realistic database test data that can be better for several aspects of testing.
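
    As a tiny illustration of the obfuscation idea (a sketch only, not a substitute for a proper masking tool, and not any specific product's method): sensitive columns can be replaced with values derived deterministically from a keyed hash, so the shape and referential consistency of the data survive while the real values do not. The class name, the name lists, and the salt below are hypothetical.

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class Pseudonymizer {
            private static final String[] FIRST = {"Alex", "Sam", "Jo", "Chris", "Pat", "Lee"};
            private static final String[] LAST  = {"Smith", "Jones", "Brown", "Taylor", "Wilson"};

            // Deterministic: the same real value always maps to the same fake value,
            // so joins and repeatable tests keep working after masking.
            public static String maskName(String realValue, String secretSalt) {
                int h = digestToInt(secretSalt + ":" + realValue);
                return FIRST[Math.floorMod(h, FIRST.length)] + " "
                     + LAST[Math.floorMod(h / 31, LAST.length)];
            }

            private static int digestToInt(String s) {
                try {
                    byte[] d = MessageDigest.getInstance("SHA-256")
                                            .digest(s.getBytes(StandardCharsets.UTF_8));
                    return ((d[0] & 0xff) << 24) | ((d[1] & 0xff) << 16) | ((d[2] & 0xff) << 8) | (d[3] & 0xff);
                } catch (NoSuchAlgorithmException e) {
                    throw new IllegalStateException(e);
                }
            }
        }

    Real masking tools also handle formats such as card numbers and addresses, and crucially run inside the secured environment, since the obfuscation process itself must read the live data.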

    Read the article

  • Announcement: Oracle Database Appliance 2.4 patch update now available

    - by uwes
    The Oracle Database Appliance 2.4 patch is now available from My Oracle Support (MOS). If you search for the Oracle Database Appliance 2.4.0.0.0 Kit under Patches, it will display the newly uploaded bundles. The patch highlights include: a normal redundancy (double-mirroring) option providing 6TB of usable storage, and enhanced diagnostics - Trace File Analyzer and ODACHK. Also, if you review the README, you may see content that says:        "The grid infrastructure and database patching, both are rolling upgradable. During our patching, we patch the node 1 first and when completed, we patch the node 2." I would like to clarify that the 'infrastructure' updates (OS, firmware, ILOM, etc.) will require a short downtime of the ODA while they are applied. When you update the grid infrastructure (--gi), the appliance manager verifies that the infrastructure was updated, so you cannot just patch the GI without first updating the infrastructure. The high-level update patch steps include (but are not limited to): download the patch update to your ODA; apply the --infra (infrastructure) update, during which the ODA databases are down and the ODA is/may be rebooted; the ODA and GI/databases are restarted; then issue the command to update the grid infrastructure/databases (the order of the steps is completed automatically and you cannot control when the nodes are brought up and down during the patching): Node 1 -- shut down databases and GI; Node 1 -- patch GI/database; Node 1 -- bring up databases and GI; Node 2 -- shut down databases and GI; Node 2 -- patch GI/database; Node 2 -- bring up databases and GI. A replay of Friday's session with Sohan on the 2.4 release can be found here. The PDF of the presentation is here. The Data Sheet, WP, and 2.4 Configurator are available on the ODA OTN site.

    Read the article

  • Import images from camera in KDE with particular directory structure

    - by Sergey
    I have been using f-spot for a few years to manage my photo archive, which is about 50K images at the moment. With the development of f-spot slowing down in recent years and me switching to KDE, I'm looking at using DigiKam, which seems to be very nice and packed with features beyond my wildest hopes :) One thing I'm missing though is the way f-spot was importing the images: it was creating subdirectories based on the image's shooting date: $HOME/Photos/2011/11/12/IMG_1234.jpg $HOME/Photos/2011/11/13/IMG_1235.jpg $HOME/Photos/2011/11/13/IMG_1236.jpg I don't seem to be able to find a way to make DigiKam behave like this - although it has some settings to change the image filename according to a mask which may include the shooting date, I see no way to tell it to create sub-directories. Is there a way to make DigiKam behave like this? Or, alternatively, what is a good program to import images from a camera and save them on disk in subdirectories according to their shooting date?
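
    For the "roll your own importer" fallback, here is a rough sketch of the directory logic in Java. Assumptions to note: it uses the file's modification time as a stand-in for the real EXIF shooting date (a proper importer would read the EXIF DateTimeOriginal tag instead), and the class and path names are hypothetical.

        import java.io.IOException;
        import java.nio.file.*;
        import java.time.LocalDate;
        import java.time.ZoneId;
        import java.util.List;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        public class PhotoImporter {
            // Copies every file under 'source' into target/yyyy/MM/dd/ based on its timestamp.
            public static void importAll(Path source, Path target) throws IOException {
                List<Path> photos;
                try (Stream<Path> s = Files.walk(source)) {
                    photos = s.filter(Files::isRegularFile).collect(Collectors.toList());
                }
                for (Path p : photos) {
                    LocalDate date = Files.getLastModifiedTime(p).toInstant()
                                          .atZone(ZoneId.systemDefault()).toLocalDate();
                    Path dir = target.resolve(String.format("%04d/%02d/%02d",
                            date.getYear(), date.getMonthValue(), date.getDayOfMonth()));
                    Files.createDirectories(dir);
                    Files.copy(p, dir.resolve(p.getFileName()), StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }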

    Read the article

  • Unable to add users to Microsoft Dynamics CRM 4.0 after database restore

    - by Wes Weeks
    Working with a client in our multi-tenant CRM environment who was doing a database migration into CRM; as part of the process, a backup of their Organization_MSCRM database was taken just prior to starting the migration in case it needed to be restored and run a second time. In this case it did, so I restored the database and let the client know he should be good to go. A few hours later I received a call that they were unable to add some new users: the users would appear as available when using the add multiple user wizard, but anyone added would not be added to CRM. It was also discussed that these users had been added to CRM initially AFTER the database backup had been taken. I turned on tracing and tried to add the users through both the single user form and the multiple user interface, and was unable to do so. The error message in the logs wasn't much help: Unexpected error adding user [email protected]: Microsoft.Crm.CrmException: INVALID_WRPC_TOKEN: Validate WRPC Token: WRPCTokenState=Invalid, TOKEN_EXPIRY=4320, IGNORE_TOKEN=False Searching on Google or Bing didn't offer any assistance. Apparently it is not a very common problem, or no one has been able to resolve it. I did some searching in the MSCRM_CONFIG database and found that there are several user tables there, and after getting my head around the structure I found that there were entries here for users that were not part of the restored DB. It seems that new users are added to both Organization_MSCRM and MSCRM_CONFIG, and after the restore these were out of sync. I needed to remove the extra entries in order to address this. Restoring the MSCRM_CONFIG database was not an option, as other clients could have been adding users at this point and a restore would risk breaking their instances of CRM. Long story short, I was finally able to generate a script to remove the bad entries, and when I tried to add users again, I was successful. In case someone else out there finds themselves in a similar situation, here is the script I used to delete the bad entries.
    DECLARE @UsersToDelete TABLE (
      UserId uniqueidentifier
    )
    Insert Into @UsersToDelete(UserId)
    Select UserId from [MSCRM_CONFIG].[dbo].[SystemUserOrganizations]
    Where CrmuserId Not in (select systemuserid from Organization_MSCRM.dbo.SystemUserBase)
    And OrganizationId = '00000000-643F-E011-0000-0050568572A1' --Id From the Organization table for this instance
    Delete From [MSCRM_CONFIG].[dbo].[SystemUserAuthentication]
    Where UserId in (Select UserId From @UsersToDelete)
    Delete From [MSCRM_CONFIG].[dbo].[SystemUserOrganizations]
    Where UserId in (Select UserId From @UsersToDelete)
    Delete From [MSCRM_CONFIG].[dbo].[SystemUser]
    Where Id in (Select UserId From @UsersToDelete)

    Read the article

  • Add a database to use with locate command

    - by Pedro Teran
    I would like to know if anyone knows how I can create a database of a file system on my computer, so I can choose this database to search for files on that file system efficiently. I ask this question since in man locate I found that I can choose a database for a different file system. It would also be great if the /var/lib/mlocate/mlocate.db database could hold the data of 2 disks. Any approaches, ideas or other suggestions would be greatly appreciated.

    Read the article

  • Permissions & File Structure w/ nginx & multiple sites

    - by Michael
    I am using nginx for the first time as a long-time Apache user. I set up a Linode to test everything and to eventually port over my websites. Previously I had /home/user/www (wwwroot). I am looking at doing something similar with /srv/www/domain/www (wwwroot) rather than /srv/domain (wwwroot); the reason is that many of the sites are WordPress, and one of the things I do for security is to move the config file one level above wwwroot, so I can't have configuration files from multiple domains in the same top-level folder. Since I own all the sites, I wasn't going to create a user for each domain. My user is a member of www-data, and I was going to use 2770 on www and have domain/www for each new domain; www would be owned by the group www-data. Is this the best way to handle this?

    Read the article

  • How to structure game states in an entity/component-based system

    - by Eva
    I'm making a game designed with the entity-component paradigm that uses systems to communicate between components as explained here. I've reached the point in my development that I need to add game states (such as paused, playing, level start, round start, game over, etc.), but I'm not sure how to do it with my framework. I've looked at this code example on game states which everyone seems to reference, but I don't think it fits with my framework. It seems to have each state handling its own drawing and updating. My framework has a SystemManager that handles all the updating using systems. For example, here's my RenderingSystem class: public class RenderingSystem extends GameSystem { private GameView gameView_; /** * Constructor * Creates a new RenderingSystem. * @param gameManager The game manager. Used to get the game components. */ public RenderingSystem(GameManager gameManager) { super(gameManager); } /** * Method: registerGameView * Registers gameView into the RenderingSystem. * @param gameView The game view registered. */ public void registerGameView(GameView gameView) { gameView_ = gameView; } /** * Method: triggerRender * Adds a repaint call to the event queue for the dirty rectangle. */ public void triggerRender() { Rectangle dirtyRect = new Rectangle(); for (GameObject object : getRenderableObjects()) { GraphicsComponent graphicsComponent = object.getComponent(GraphicsComponent.class); dirtyRect.add(graphicsComponent.getDirtyRect()); } gameView_.repaint(dirtyRect); } /** * Method: renderGameView * Renders the game objects onto the game view. * @param g The graphics object that draws the game objects. */ public void renderGameView(Graphics g) { for (GameObject object : getRenderableObjects()) { GraphicsComponent graphicsComponent = object.getComponent(GraphicsComponent.class); if (!graphicsComponent.isVisible()) continue; GraphicsComponent.Shape shape = graphicsComponent.getShape(); BoundsComponent boundsComponent = object.getComponent(BoundsComponent.class); Rectangle bounds = boundsComponent.getBounds(); g.setColor(graphicsComponent.getColor()); if (shape == GraphicsComponent.Shape.RECTANGULAR) { g.fill3DRect(bounds.x, bounds.y, bounds.width, bounds.height, true); } else if (shape == GraphicsComponent.Shape.CIRCULAR) { g.fillOval(bounds.x, bounds.y, bounds.width, bounds.height); } } } /** * Method: getRenderableObjects * @return The renderable game objects. */ private HashSet<GameObject> getRenderableObjects() { return gameManager.getGameObjectManager().getRelevantObjects( getClass()); } } Also all the updating in my game is event-driven. I don't have a loop like theirs that simply updates everything at the same time. I like my framework because it makes it easy to add new GameObjects, but doesn't have the problems some component-based designs encounter when communicating between components. I would hate to chuck it just to get pause to work. Is there a way I can add game states to my game without removing the entity-component design? Does the game state example actually fit my framework, and I'm just missing something? EDIT: I might not have explained my framework well enough. My components are just data. If I was coding in C++, they'd probably be structs. Here's an example of one: public class BoundsComponent implements GameComponent { /** * The position of the game object. */ private Point pos_; /** * The size of the game object. 
*/ private Dimension size_; /** * Constructor * Creates a new BoundsComponent for a game object with initial position * initialPos and initial size initialSize. The position and size combine * to make up the bounds. * @param initialPos The initial position of the game object. * @param initialSize The initial size of the game object. */ public BoundsComponent(Point initialPos, Dimension initialSize) { pos_ = initialPos; size_ = initialSize; } /** * Method: getBounds * @return The bounds of the game object. */ public Rectangle getBounds() { return new Rectangle(pos_, size_); } /** * Method: setPos * Sets the position of the game object to newPos. * @param newPos The value to which the position of the game object is * set. */ public void setPos(Point newPos) { pos_ = newPos; } } My components do not communicate with each other. Systems handle inter-component communication. My systems also do not communicate with each other. They have separate functionality and can easily be kept separate. The MovementSystem doesn't need to know what the RenderingSystem is rendering to move the game objects correctly; it just need to set the right values on the components, so that when the RenderingSystem renders the game objects, it has accurate data. The game state could not be a system, because it needs to interact with the systems rather than the components. It's not setting data; it's determining which functions need to be called. A GameStateComponent wouldn't make sense because all the game objects share one game state. Components are what make up objects and each one is different for each different object. For example, the game objects cannot have the same bounds. They can have overlapping bounds, but if they share a BoundsComponent, they're really the same object. Hopefully, this explanation makes my framework less confusing.
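
    One common way to get pause and other states without abandoning the entity/component design is to hold the state in a single shared object (not a component, for exactly the reason given above) and let each system decide whether it reacts in the current state. This is only a minimal, framework-agnostic sketch; the class names are hypothetical and not part of the poster's framework.

        import java.util.EnumSet;

        public class GameStateManager {
            public enum GameState { LEVEL_START, PLAYING, PAUSED, ROUND_START, GAME_OVER }

            private GameState current = GameState.LEVEL_START;

            public GameState current() { return current; }

            public void change(GameState next) {
                current = next;
                // A fuller version would also notify interested systems here
                // (e.g. stop animations on PAUSED, reset components on ROUND_START).
            }

            // Systems call this at the top of their event handlers: e.g. a movement
            // system runs only while PLAYING, while the rendering system runs in any state.
            public boolean isActive(EnumSet<GameState> runsIn) {
                return runsIn.contains(current);
            }
        }

    A MovementSystem would then guard its update with something like if (!states.isActive(EnumSet.of(GameState.PLAYING))) return; so pausing becomes a single state change, and no component or system has to be removed from the design.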

    Read the article

  • How to design a replay system

    - by daddz
    So how would I design a replay system? You may know it from certain games like Warcraft 3 or Starcraft where you can watch the game again after it has been played already. You end up with a relatively small replay file. So my questions are: How to save the data? (custom format?) (small filesize) What shall be saved? How to make it generic so it can be used in other games to record a time period (and not a complete match for example)? Make it possible to forward and rewind (WC3 couldn't rewind as far as I remember)
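
    For the "what to save" and "small file" questions, the usual answer is to record the stream of player inputs (plus the initial random seed) rather than world state, and re-simulate on playback; rewinding then needs either periodic snapshots or re-simulation from the start, which is also why WC3-style replays could fast-forward but not rewind. A rough sketch of the recording side, with hypothetical names and a deliberately simple binary format:

        import java.io.DataOutputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        public class ReplayRecorder {
            // One recorded input: the game tick it happened on, plus a command id and argument.
            public static class InputEvent {
                final long tick; final short command; final int argument;
                public InputEvent(long tick, short command, int argument) {
                    this.tick = tick; this.command = command; this.argument = argument;
                }
            }

            private final long randomSeed;                      // needed to re-simulate deterministically
            private final List<InputEvent> events = new ArrayList<>();

            public ReplayRecorder(long randomSeed) { this.randomSeed = randomSeed; }

            public void record(long tick, short command, int argument) {
                events.add(new InputEvent(tick, command, argument));
            }

            // Writes seed + events; a few bytes per input keeps even long matches small.
            public void save(String path) throws IOException {
                try (DataOutputStream out = new DataOutputStream(new FileOutputStream(path))) {
                    out.writeLong(randomSeed);
                    out.writeInt(events.size());
                    for (InputEvent e : events) {
                        out.writeLong(e.tick);
                        out.writeShort(e.command);
                        out.writeInt(e.argument);
                    }
                }
            }
        }

    Playback reads the same file and feeds each event back into the (deterministic) simulation at its recorded tick; recording a time slice instead of a whole match is just a matter of trimming the event list to a tick range.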

    Read the article

  • PowerShell One Liner: Duplicating a folder structure in a Sharepoint document library

    - by Darren Gosbell
    I was asked by someone at work the other day, if it was possible in Sharepoint to create a set of top level folders in one document library based on the set of folders in another library. One document library has a set of top level folders that is basically a client list and we needed to create the same top level folders in another library. I knew that it was possible to open a Sharepoint document library in explorer using a UNC style path and that you could map a drive using a technique like this one: http://www.endusersharepoint.com/2007/11/16/can-i-map-a-document-library-as-a-mapped-drive/. But while explorer would let us copy the folders, it would also take all of the folder contents too, which was not what we wanted. So I figured that some sort of PowerShell script was probably the way to go and it turned out to be even easier than I thought. The following script did it in one line, so I thought I would post it here in my "online memory". :) dir "\\sharepoint\client documents" | where {$_.PSIsContainer} | % {mkdir "\\sharepoint\admin documents\$($_.Name)"} I use "dir" to get a listing from the source folder, pipe it through "where" to get only objects that are folders and then do a foreach (using the % alias) and call "mkdir".

    Read the article

  • Generate a merge statement from table structure

    - by Nigel Rivett
    This code generates a merge statement joining on the natural key and checking all other columns to see if they have changed. The full version deals with type 2 processing and an audit trail, but this version is useful. Just the insert or update part is handy too. Change the table at the top (spt_values in master in this version) and the join columns for the merge in @nk. The output generated is at the top and the code to run to generate it is below.
    Output
    merge spt_values a
     using spt_values b
     on a.name = b.name
     and a.number = b.number
     and a.type = b.type
     when matched and (1=0
     or (a.low <> b.low) or (a.low is null and b.low is not null) or (a.low is not null and b.low is null)
     or (a.high <> b.high) or (a.high is null and b.high is not null) or (a.high is not null and b.high is null)
     or (a.status <> b.status) or (a.status is null and b.status is not null) or (a.status is not null and b.status is null)
     )
     then update set
       low = b.low
     , high = b.high
     , status = b.status
     when not matched by target then insert
     (
       name
     , number
     , type
     , low
     , high
     , status
     )
     values
     (
       b.name
     , b.number
     , b.type
     , b.low
     , b.high
     , b.status
     );
    Generator
    set nocount on
    declare @t varchar(128) = 'spt_values'
    declare @i int = 0
    -- this is the natural key on the table used for the merge statement join
    declare @nk table (ColName varchar(128))
    insert @nk select 'Number'
    insert @nk select 'Name'
    insert @nk select 'Type'
    declare @cols table (seq int, nkseq int, type int, colname varchar(128))
    ;with cte as
    (
    select ordinal_position,
     type = case when columnproperty(object_id(@t), COLUMN_NAME,'IsIdentity') = 1 then 3
                 when nk.ColName is not null then 1
                 else 0 end,
     COLUMN_NAME
    from information_schema.columns c
    left join @nk nk on c.column_name = nk.ColName
    where table_name = @t
    )
    insert @cols (seq, nkseq, type, colname)
    select ordinal_position, row_number() over (partition by type order by ordinal_position), type, COLUMN_NAME
    from cte
    declare @result table (i int, j int, k int, data varchar(500))
    select @i = @i + 1
    insert @result (i, data) select @i, 'merge ' + @t + ' a'
    select @i = @i + 1
    insert @result (i, data) select @i, ' using cte b'
    select @i = @i + 1
    insert @result (i, j, data)
    select @i, nkseq, ' ' + case when nkseq = 1 then 'on' else 'and' end + ' a.' + ColName + ' = b.' + ColName
    from @cols where type = 1
    select @i = @i + 1
    insert @result (i, data) select @i, ' when matched and (1=0'
    select @i = @i + 1
    insert @result (i, j, k, data)
    select @i, seq, 1,
     ' or (a.' + ColName + ' <> b.' + ColName + ')'
     + ' or (a.' + ColName + ' is null and b.' + ColName + ' is not null)'
     + ' or (a.' + ColName + ' is not null and b.' + ColName + ' is null)'
    from @cols where type <> 1
    select @i = @i + 1
    insert @result (i, data) select @i, ' )'
    select @i = @i + 1
    insert @result (i, data) select @i, ' then update set'
    select @i = @i + 1
    insert @result (i, j, data)
    select @i, nkseq, ' ' + case when nkseq = 1 then ' ' else ', ' end + colname + ' = b.' + colname
    from @cols where type = 0
    select @i = @i + 1
    insert @result (i, data) select @i, ' when not matched by target then insert'
    select @i = @i + 1
    insert @result (i, data) select @i, ' ('
    select @i = @i + 1
    insert @result (i, j, data)
    select @i, seq, ' ' + case when seq = 1 then ' ' else ', ' end + colname
    from @cols where type <> 3
    select @i = @i + 1
    insert @result (i, data) select @i, ' )'
    select @i = @i + 1
    insert @result (i, data) select @i, ' values'
    select @i = @i + 1
    insert @result (i, data) select @i, ' ('
    select @i = @i + 1
    insert @result (i, j, data)
    select @i, seq, ' ' + case when seq = 1 then ' ' else ', ' end + 'b.' + colname
    from @cols where type <> 3
    select @i = @i + 1
    insert @result (i, data) select @i, ' );'
    select data from @result order by i,j,k,data

    Read the article

  • Oracle Database 11g Release 2: Optimized for SAP

    - by jenny.gelhausen
    With the release of Oracle Database 11g Release 2 Enterprise Edition, Oracle has further enhanced its long-standing commitment to joint Oracle and SAP AG customers. Get more details on the release. The Oracle Database Insider sat down with Gerhard Kuppler, senior director Corporate SAP Account at Oracle, to find out just how much Oracle Database 11g Release 2 can impact SAP customers. Check out the interview details.

    Read the article

  • Linking Secrets - Part I - Linking Structure

    Google classes a link as a 'vote' for your website, as most people only link to a site if they are talking about it or referring to it as a good resource. This means the almighty link has become a huge factor in how well you rank in the search engines.

    Read the article

  • Oracle Database As Seen at Sapphire 2010

    - by jenny.gelhausen
    Seen around the Orange County Convention Center in Orlando, Florida at the May 16-19th SAPPHIRE 2010 conference: Oracle Database is the #1 database for SAP applications, and Oracle Database 11g Release 2 is available for SAP. By upgrading you can lower the cost of your SAP applications infrastructure and improve your quality of service, so we encourage you to consider the upgrade.

    Read the article

  • How Do I Search For Struct Items In A Vector? [migrated]

    - by Vladimir Marenus
    I'm attempting to create an inventory system using a vector implementation, but I seem to be having some troubles. I'm running into issues using a struct I made. NOTE: This isn't actually in a game code, this is a separate Solution I am using to test my knowledge of vectors and structs! struct aItem { string itemName; int damage; }; int main() { aItem healingPotion; healingPotion.itemName = "Healing Potion"; healingPotion.damage= 6; aItem fireballPotion; fireballPotion.itemName = "Potion of Fiery Balls"; fireballPotion.damage = -2; vector<aItem> inventory; inventory.push_back(healingPotion); inventory.push_back(healingPotion); inventory.push_back(healingPotion); inventory.push_back(fireballPotion); if(find(inventory.begin(), inventory.end(), fireballPotion) != inventory.end()) { cout << "Found"; } system("PAUSE"); return 0; } The preceeding code gives me the following error: 1c:\program files (x86)\microsoft visual studio 11.0\vc\include\xutility(3186): error C2678: binary '==' : no operator found which takes a left-hand operand of type 'aItem' (or there is no acceptable conversion) There is more to the error, if you need it please let me know. I bet it's something small and silly, but I've been thumping at it for over two hours. Thanks in advance!

    Read the article

  • Convert filenames to their checksum before saving to prevent duplicates. Is it a smart thing to do?

    - by Xananax
    TL;DR: what the title says. I am developing some sort of image board in PHP. I was thinking of changing each image's filename to its checksum prior to saving it. This way, I might be able to prevent duplicates. I know this wouldn't work for two images that are the same but differ in size or level of compression or whatnot, but this method would allow for an early check. What bugs me is that I have never seen this method implemented anywhere, so I was wondering if there is a catch to it. Maybe it is just more efficient to keep the original filename and store the hash in the DB? Maybe the whole method is just not useful and my question is moot? What do you think? On a side note, I don't really get how hashes are calculated, so I was wondering, if my first question checks out, whether it would be possible to calculate the likelihood that two images are similar by comparing their hashes (Levenshtein distance or something of the sort).
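
    For reference, computing a content hash to use as the filename is only a few lines in most languages; in PHP the direct equivalents are hash_file() or sha1_file(). Below is a sketch in Java of the same idea, assuming SHA-256 and hex output; the class name is hypothetical.

        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class FileHasher {
            // Streams the file so large images are not loaded into memory at once.
            public static String sha256Hex(Path file) throws IOException, NoSuchAlgorithmException {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                try (InputStream in = Files.newInputStream(file)) {
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        md.update(buffer, 0, read);
                    }
                }
                StringBuilder hex = new StringBuilder();
                for (byte b : md.digest()) hex.append(String.format("%02x", b));
                return hex.toString();
            }

            public static void main(String[] args) throws Exception {
                // e.g. store the upload as <hash>.jpg and reject it if that name already exists
                System.out.println(sha256Hex(Paths.get(args[0])) + ".jpg");
            }
        }

    As for the side question: byte-identical duplicates are caught this way, but re-encoded or resized copies produce completely different cryptographic hashes, and Levenshtein distance between such hashes says nothing about image similarity; detecting near-duplicates needs a perceptual hash, which is a different technique.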

    Read the article

  • Oracle Database 11g Release 2 for Windows available!

    - by Mike Dietrich
    Hi there, just returned from vacation - and the Easter bunny (was its name Tux??) just delivered the Windows release (32-bit and 64-bit) of Oracle Database 11g Release 2. It's available for download from edelivery.oracle.com or OTN: Oracle Database 11g Release 2 for Windows 32-bit, and Oracle Database 11g Release 2 for Windows 64-bit. And if you wonder why it took sooooooo long to release Oracle Database 11g Release 2 on the Windows platform: the developers have incorporated a lot of the available fixes on top of 11.2.0.1.0 - so it's more a 11.2.0.1.1/2 ;-) And don't forget to download the newest version of the upgrade slides: http://apex.oracle.com/folien Use the keyword (Schluesselwort): upgrade112

    Read the article

  • Type dependencies vs directory structure

    - by paul
    Something I've been wondering about recently is how to organize types in directories/namespaces w.r.t. their dependencies. One method I've seen, which I believe is the recommendation for both Haskell and .NET (though I can't find the references for this at the moment), is: Type Type/ATypeThatUsesType Type/AnotherTypeThatUsesType My natural inclination, however, is to do the opposite: Type Type/ATypeUponWhichTypeDepends Type/AnotherTypeUponWhichTypeDepends Questions: Is my inclination bass-ackwards? Are there any major benefits/pitfalls of one over the other? Is it just something that depends on the application, e.g. whether you're building a library vs doing normal coding?

    Read the article
