IT security organizations are spending too much on data protection for compliance but not enough on securing key trade secrets, the real crown jewels of corporate data.
Last summer I set up Change Data Capture for a client to track changes to their application database and apply those changes to their data warehouse. The client had some issues a short while back and felt they needed to increase the retention period from the default 3 days to 5 days. I ran this to make that change: EXEC sys.sp_cdc_change_job @job_type = 'cleanup', @retention = 7200; The value 7200 is the number of minutes in a period of 5 days. All was well, but they recently asked how they can verify...
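One way to verify the setting, assuming you can query msdb: the cleanup job's retention (in minutes) is recorded in msdb.dbo.cdc_jobs, and sys.sp_cdc_help_jobs reports the same information. A minimal sketch:

-- Run in the CDC-enabled database; shows capture and cleanup job settings.
EXEC sys.sp_cdc_help_jobs;

-- Or read the retention value directly from msdb:
SELECT job_type, retention
FROM msdb.dbo.cdc_jobs;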
Is it possible to use ASP.NET Dynamic Data with SubSonic 3 in place of LINQ to SQL classes or the Entity Framework? MetaModel.RegisterContext() throws an exception if you use the context class that SubSonic generates. I thought I remembered coming across a SubSonic/Dynamic Data example back before SubSonic 3 was released, but I can't find it now. Has anyone been able to get this to work?
I am looking for some well-known algorithms that can be considered while handling very large amounts of data. (Edit: by large amounts of data I mean records in a database, excluding blobs.) These algorithms, if not in totality then in parts, may be used in big web applications like Twitter, Last.fm, Amazon, etc.
Specifically, I'm looking for names of, or links to, such algorithms. My primary interest lies in developing a deep understanding of working with large database records and writing efficient code for doing so.
In my last question I asked how best to send a string from one view controller to another, both of which were on a navigation stack:
http://stackoverflow.com/questions/2898860/pass-string-from-tableviewcontroller-to-viewcontroller-in-navigation-stack
However, I have just realised that I could instead pass the path to the file in the app's Documents folder. Since the first view controller (the table view) has already read the data from the file, should I pass the path or the data itself to the pushed view controller?
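Either way the handoff is just a property set before the push; a minimal Swift sketch, where DetailViewController and its two properties are hypothetical names, not from the original question:

import UIKit

// Hypothetical destination screen; expose one property per option.
final class DetailViewController: UIViewController {
    var filePath: String?  // option 1: receive the path and re-read the file
    var fileData: Data?    // option 2: receive the already-loaded bytes
}

// In the table view controller, before pushing:
// let detailVC = DetailViewController()
// detailVC.filePath = path          // cheap to pass; costs a second read
// // detailVC.fileData = data       // avoids re-reading; holds data in memory
// navigationController?.pushViewController(detailVC, animated: true)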
Oracle's Master Data Management suite has seen remarkable development progress in the past year and a half. By leveraging out-of-the-box integration to applications provided by Application Integration Architecture, the cost, risk and time it takes to implement an MDM solution have been cut in half. Oracle Applications are now 'MDM Aware', Data Quality tools have reached state-of-the-art status, and new hubs are coming online. In this AppsCast, Pascal Laik, VP of MDM Products, discusses this progress, what it means for Oracle customers, and where we are going from here.
I have a data.frame:
df<-data.frame(a=c("x","x","y","y"),b=c(1,2,3,4))
> df
a b
1 x 1
2 x 2
3 y 3
4 y 4
What's the easiest way to print out each pair of values as a list of strings like this:
"x1", "x2", "y1", "y2"
Choosing a long term data storage medium isn't as easy as you may think. You might imagine that the data could be burnt to CD, locked in a cupboard and that it would last forever, however unfortunately... [Author: Chris Holgate - Computers and Internet - April 02, 2010]
The popular PTS (Platform Technology Services) technical trainings for partners now include a workshop on Big Data.
The first workshop will take place in Milan on July 10-12. (You can register by clicking the link below.)
Oracle Big Data Technical Workshop, July 10-12, 2012: Cinisello Balsamo, Milan, Italy
For more info contact [email protected]
My client wants to standardize address information for existing and future addresses collected for their customers, particularly the street suffixes. The application used to enter and collect address information has the street suffix separated from the address field, but it is a textbox instead of a drop-down list, so the values are not standardized. I know there are some options out there to standardize data, but they would like a less expensive alternative. Are there any functions in SQL Server that I can use to standardize the data?
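There is no single built-in standardization function, but for suffixes a small mapping table plus an UPDATE...JOIN goes a long way; a sketch with hypothetical table and column names:

-- Hypothetical schema: Customers(StreetSuffix); SuffixMap lists known variants.
CREATE TABLE SuffixMap (Variant varchar(20) PRIMARY KEY, Standard varchar(20));
INSERT INTO SuffixMap VALUES
    ('St', 'Street'), ('St.', 'Street'),
    ('Ave', 'Avenue'), ('Ave.', 'Avenue'),
    ('Rd', 'Road');

-- Rewrite each stored suffix to its standard form.
UPDATE c
SET c.StreetSuffix = m.Standard
FROM Customers AS c
JOIN SuffixMap AS m
    ON UPPER(LTRIM(RTRIM(c.StreetSuffix))) = UPPER(m.Variant);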
States are passing more and more data security laws, and the US Senate and the House have bills meandering through Congress; securing personal information and encrypting that data is no longer optional.
A corrupt table in MS Access means lost time and data. It can lead to a loss of revenue or even employment. Learn how you might be able to recover most of the data when the worst happens.
Among the big news this week in green data center management: Singapore pledges to cut carbon emissions by 16 percent by 2020, and for the second year in a row, data security is the primary concern and driver of IT asset disposition compliance efforts.
As Energy Star certification for data centers moves closer to becoming a reality, the EPA has also begun crafting a specification for certifying data center storage devices.
If I have systems that are based on real-time data, how can I ensure that all the current information is redundantly stored in a file? So that when the program starts again, it can use this information to initialize itself back to the state it was in when it closed.
I know of XStream and HSQLDB, but I wasn't sure whether these are the best options for data that needs to be a literal carbon copy.
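One common pattern, sketched below with plain Java serialization (XStream's toXML/fromXML would slot in the same way): write each snapshot to a temporary file and atomically rename it over the previous one, so a crash mid-write never corrupts the last good copy. The class and file names here are illustrative assumptions.

import java.io.*;
import java.nio.file.*;

public class SnapshotStore {
    private final Path target = Paths.get("state.bin");
    private final Path temp = Paths.get("state.bin.tmp");

    // Persist the current state. The rename is atomic on typical local
    // filesystems, so restart always finds a complete old or new snapshot.
    public void save(Serializable state) throws IOException {
        try (ObjectOutputStream out =
                new ObjectOutputStream(Files.newOutputStream(temp))) {
            out.writeObject(state);
        }
        Files.move(temp, target, StandardCopyOption.REPLACE_EXISTING,
                StandardCopyOption.ATOMIC_MOVE);
    }

    // Restore the last snapshot on startup, or return null if none exists.
    public Object load() throws IOException, ClassNotFoundException {
        if (!Files.exists(target)) return null;
        try (ObjectInputStream in =
                new ObjectInputStream(Files.newInputStream(target))) {
            return in.readObject();
        }
    }
}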
With Master Data Services, IT organizations can centrally manage critical data assets companywide.
Too many SQL Servers to keep up with? Download a free trial of SQL Response to monitor your SQL Servers in just one intuitive interface. "The monitoring in SQL Response is excellent." - Mike Towery
When you create an Oracle database on the Amazon cloud, you will need to store your database files somewhere on the EC2 cloud. There are basically three places where database files can be stored:
1. Local drive - This is the local drive that is part of the virtual server EC2 instance.
2. Elastic Block Storage (EBS) - Network attached storage that appears as a local drive.
3. Simple Storage Service (S3) - 'Storage for the Internet'.
S3 is not high speed and is intended for storing static, document-type files; it can also be used for storing static web page files. Local drives are ephemeral, so they are not appropriate as a database storage device. That leaves EBS, which is the best place to store database files. EBS volumes appear as local disk drives, but they are actually network-attached to an Amazon EC2 instance. In addition, EBS persists independently of the running life of a single Amazon EC2 instance.
If you use an EBS-backed instance for your database data, the data will remain available after a reboot but not after termination. In many cases you would not need to terminate your instance but only stop it, which is the equivalent of a shutdown. To save your database data before you terminate an instance, you can snapshot the EBS volume to S3.
Using EBS as a data store, you can move your Oracle data files from one instance to another. This allows you to move your database from one region or availability zone to another. Unfortunately, to scale out your Oracle RDS on AWS you cannot have read-only replicas; this is only possible with the other Oracle relational database, MySQL.
The free micro instances use EBS as their storage.
This is a very good white paper that has more details:
AWS Storage Options
This white paper also discusses SQS, SimpleDB, and Amazon RDS in the context of storage devices; however, these are not storage devices you would use to store an Oracle database.
This slide deck covers much of the information that is in the white paper:
AWS Storage Options slideshow
After a JUnit test case runs, should it delete the test data related to that test case?
Or will keeping the test data help developers debug the code?
Thanks
Joseph
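For reference, cleanup is usually done in a teardown method so each run starts from a known state; a minimal JUnit 4 sketch, where deleteTestData() is a hypothetical helper:

import org.junit.After;
import org.junit.Test;

public class CustomerDaoTest {

    @Test
    public void savesCustomer() {
        // ... exercise the code under test, inserting rows ...
    }

    // Runs after every test. Delete what the test created so runs stay
    // repeatable; comment this out (or log instead) to keep data for debugging.
    @After
    public void tearDown() {
        // deleteTestData();  // hypothetical cleanup helper
    }
}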
There is a popular design for a database that requires a built-in audit trail of amendments and additions, where data is never deleted but merely superseded by a later version. Whilst this is conceptually simple, it has always made for complicated SQL for reporting the latest version of the data. Alex joins the debate on the best way of doing this with an example using an indexed view and a filtered index.
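A common shape for this design (a sketch, not necessarily Alex's example): keep every version in one table, flag the current row, and back the 'latest version' query with a filtered index.

-- Hypothetical audit-trail table; names are illustrative.
CREATE TABLE CustomerHistory (
    CustomerID int NOT NULL,
    Version int NOT NULL,
    Name varchar(100) NOT NULL,
    IsCurrent bit NOT NULL,
    CONSTRAINT PK_CustomerHistory PRIMARY KEY (CustomerID, Version)
);

-- Enforces at most one current row per customer and speeds up its lookup.
CREATE UNIQUE NONCLUSTERED INDEX UX_CustomerHistory_Current
    ON CustomerHistory (CustomerID)
    INCLUDE (Name)
    WHERE IsCurrent = 1;

-- Reporting the latest version is then a simple, index-backed filter.
SELECT CustomerID, Name
FROM CustomerHistory
WHERE IsCurrent = 1;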
df <- data.frame(var1=c('a', 'b', 'c'), var2=c('d', 'e', 'f'), freq=1:3)
What is the simplest way to expand the first two columns of the data.frame above, so that
each row appears the number of times specified in the column 'freq'?
In other words, go from this:
>df
var1 var2 freq
1 a d 1
2 b e 2
3 c f 3
To this:
>df.expanded
var1 var2
1 a d
2 b e
3 b e
4 c f
5 c f
6 c f
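A compact base-R answer (one of several possible) is to index the data.frame with rep(), repeating each row number freq times:

df.expanded <- df[rep(seq_len(nrow(df)), df$freq), c("var1", "var2")]
rownames(df.expanded) <- NULL  # drop the duplicated row names
df.expanded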
eWeek: "Data Apple collects about users from its vaunted iPhone is so valuable that the company must build a special search engine just to keep Google from gleaning insight from that data, analysts say."
Nowadays nothing is more important than backing up the data on your computer. But there are still many people who do not understand the importance of protecting data. Therefore when they proceed fo... [Author: Susan Brown - Computers and Internet - May 08, 2010]
Denise Rogers discusses the essential tasks in conducting effective software evaluations for data warehousing and business intelligence. Each step depends on the previous one, starting with establishing the framework of the evaluation and adding progressively more elaborate data that facilitates a resolute decision-making process.
I'm currently developing a replication system to keep data in sync between an arbitrary number of servers.
Some of these servers exist in one cluster on one LAN. Others exist somewhere else in the world.
I'm wondering: what are the pros and cons of the different paths we could choose to flow replicated data between servers?
In other words, what are the different strategies to load-balance the replication process?