Search Results

Search found 34465 results on 1379 pages for 'database permissions'.


  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is a guest post by Paras Doshi. Paras Doshi is a Research Intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure.

    SQL Azure is, in essence, SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability and reduced management overhead. For an introduction to SQL Azure, check out the post by Pinal here.

    In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will write custom code and host it in an Azure worker role; the aim is to add features that are not currently available in SQL Azure, or features that need to be customized for flexibility. This way we extend SQL Azure's capability by building solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud. An Azure worker role can perform background processes, which makes it the ideal tool for handling processes such as synchronization and backup.

    First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let's see some scenarios in which synchronization between SQL Azure databases is beneficial:

    - Scaling out access over multiple databases enables us to handle workload efficiently.
    - As of now, a SQL Azure database can be hosted in any one of six datacenters. By synchronizing databases located in different datacenters, one can extend the data by enabling access to geographically distributed data.

    And some scenarios in which SQL Server to SQL Azure database synchronization is beneficial:

    - Backing up a SQL Azure database on local infrastructure.
    - Rather than investing in local infrastructure for increased workloads, such workloads can be handled by the cloud.
    - The ability to extend data to datacenters located across the world, enabling efficient data access from remote locations.

    Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. In this article, I aim to provide a bird's-eye view of what code that synchronizes SQL Azure databases looks like, and then list resources that can help you develop the solution from scratch. When you add a new worker role to the cloud-based project, this is how the code will look. (Note: I have added comments to the skeleton code to point out the modifications that will be required to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf)

    Enabling SQL Azure database synchronization through the Sync Framework is a two-step process. In the first step, the database is provisioned: the Sync Framework creates tracking tables, stored procedures, triggers, and tables to store metadata to enable synchronization. This is a one-time step; the code for it goes in the Setup() function, which is called once when the worker role starts. The second step is continuous (or on-demand) synchronization of the SQL Azure databases by propagating changes between them. This is done on a continuous basis by calling the Sync() function in a while loop; the code logic to synchronize changes between the SQL Azure databases goes in the Sync() function. Discussing the coding part step by step is out of the scope of this article, but a rough sketch follows below.
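    The linked PDFs contain the actual snippets; as a rough guide, here is a minimal sketch of the shape they describe, assuming the standard RoleEntryPoint pattern from the Azure SDK. Setup() and Sync() are placeholders for the Sync Framework provisioning and synchronization logic, which the resources below cover:

        using System;
        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                // One-time step: provision both SQL Azure databases for sync.
                // The Sync Framework creates the tracking tables, triggers and
                // stored procedures it needs to detect and propagate changes.
                Setup();

                while (true)
                {
                    // Continuous step: propagate changes between the databases.
                    Sync();
                    Thread.Sleep(TimeSpan.FromMinutes(1));
                }
            }

            // Scheduled variant (snippet 2): replace the loop above with one
            // that only calls Sync() at 10:00 pm each day, e.g.:
            //
            //   while (true)
            //   {
            //       var now = DateTime.Now;
            //       if (now.Hour == 22 && now.Minute == 0) Sync();
            //       Thread.Sleep(TimeSpan.FromSeconds(30));
            //   }

            private void Setup()
            {
                // Provisioning code built with the Sync Framework 2.1 goes here.
            }

            private void Sync()
            {
                // Change-propagation code built with the Sync Framework goes here.
            }
        }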
    For the details, let me point you to a resource, which is given here. Also, note that before you start developing the code, you will need to install the Sync Framework 2.1 SDK (download here), and you will need to reference some libraries before you start coding. Details are available in the article I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter; for pricing information, go here.

    Currently, a tool named Data Sync, which is built on top of the Sync Framework, is available in CTP. It allows SQL Azure <-> SQL Server and SQL Azure <-> SQL Azure synchronization without writing a single line of code; however, in some cases the custom code shown in this blog post provides flexibility that is not available with Data Sync. For instance, filtering is not supported in SQL Azure Data Sync CTP2; if you want such functionality now, you have the option of developing custom code using the Sync Framework.

    This code can easily be extended to synchronize on a schedule. Let us say we want the databases to be synchronized every day at 10:00 pm. This is what the code will look like now (see the scheduled variant in the sketch above): (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf)

    Don't you think that by writing such code, we are imitating the functionality provided by the SQL Server Agent for SQL Server? Think about it: we are scheduling our administrative task by writing custom code; in other words, we have developed a "lightweight SQL Server Agent for SQL Azure!" Since SQL Server Agent is not currently available in the cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role. If you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, so we need to store the state of the job ourselves, and the choices are SQL Azure or Azure tables. Note that this solution requires custom code and is not UI-driven; however, for now, it can act as a temporary solution until SQL Server Agent is made available in the cloud. Moreover, this solution does not encompass all the functionality that SQL Server Agent provides, but it does open up an interesting avenue to schedule tasks such as backup and synchronization of SQL Azure databases by writing custom code in the Azure worker role.

    Now, let us see one more possibility: running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or to blobs. If you upload locally, consider the data transfer cost. If you upload to blobs residing in the same datacenter, no transfer cost applies, but the cost of the blob storage does. So, before choosing an option, evaluate your preferences, keeping the cost of each option in mind.

    In this article, I have shown that an Azure worker role solution can be developed to synchronize SQL Azure databases, and that a lightweight SQL Server Agent for SQL Azure can be built. We also discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles.
    So at the end of the day, you need to ask: am I willing to write custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions or feedback, you can comment below or mail me at Paras[at]student-partners[dot]com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQL Monitor’s data repository

    - by Chris Lambrou
    As one of the developers of SQL Monitor, I often get requests passed on by our support people from customers who are looking to dip into SQL Monitor's own data repository, in order to pull out bits of information that they're interested in. Since there's clearly interest out there in playing around directly with the data repository, I thought I'd write some blog posts to start to describe how it all works. The hardest part for me is knowing where to begin, since the schema of the data repository is pretty big. Hmmm… I guess it's tricky for anyone to write anything but the most trivial of queries against the data repository without understanding the hierarchy of monitored objects, so perhaps my first post should start there.

    I always imagine that whenever a customer fires up SSMS and starts to explore their SQL Monitor data repository database, they become immediately bewildered by the schema; that was certainly my experience when I did so for the first time. The following query shows the number of each type of object in the data repository schema:

        SELECT type_desc, COUNT(*) AS [count]
        FROM sys.objects
        GROUP BY type_desc
        ORDER BY type_desc;

        type_desc                          count
        DEFAULT_CONSTRAINT                    63
        FOREIGN_KEY_CONSTRAINT               181
        INTERNAL_TABLE                         3
        PRIMARY_KEY_CONSTRAINT               190
        SERVICE_QUEUE                          3
        SQL_INLINE_TABLE_VALUED_FUNCTION     381
        SQL_SCALAR_FUNCTION                    2
        SQL_STORED_PROCEDURE                 100
        SYSTEM_TABLE                          41
        UNIQUE_CONSTRAINT                     54
        USER_TABLE                           193
        VIEW                                 124

    With 193 tables, 124 views, 100 stored procedures and 381 table-valued functions, that's quite a hefty schema, and when you browse through it using SSMS, it can be a bit daunting at first. So, where to begin? Well, let's narrow things down a bit and only look at the tables belonging to the data schema. That's where all of the collected monitoring data is stored by SQL Monitor. The following query gives us the names of those tables:

        SELECT sch.name + '.' + obj.name AS [name]
        FROM sys.objects obj
        JOIN sys.schemas sch ON sch.schema_id = obj.schema_id
        WHERE obj.type_desc = 'USER_TABLE' AND sch.name = 'data'
        ORDER BY sch.name, obj.name;

    This query still returns 110 tables. I won't show them all here, but let's have a look at the first few of them:

        data.Cluster_Keys
        data.Cluster_Machine_ClockSkew_UnstableSamples
        data.Cluster_Machine_Cluster_StableSamples
        data.Cluster_Machine_Keys
        data.Cluster_Machine_LogicalDisk_Capacity_StableSamples
        data.Cluster_Machine_LogicalDisk_Keys
        data.Cluster_Machine_LogicalDisk_Sightings
        data.Cluster_Machine_LogicalDisk_UnstableSamples
        data.Cluster_Machine_LogicalDisk_Volume_StableSamples
        data.Cluster_Machine_Memory_Capacity_StableSamples
        data.Cluster_Machine_Memory_UnstableSamples
        data.Cluster_Machine_Network_Capacity_StableSamples
        data.Cluster_Machine_Network_Keys
        data.Cluster_Machine_Network_Sightings
        data.Cluster_Machine_Network_UnstableSamples
        data.Cluster_Machine_OperatingSystem_StableSamples
        data.Cluster_Machine_Ping_UnstableSamples
        data.Cluster_Machine_Process_Instances
        data.Cluster_Machine_Process_Keys
        data.Cluster_Machine_Process_Owner_Instances
        data.Cluster_Machine_Process_Sightings
        data.Cluster_Machine_Process_UnstableSamples
        …

    There are two things I want to draw your attention to:

    - The table names describe a hierarchy of the different types of object that are monitored by SQL Monitor (e.g. clusters, machines and disks).
    - For each object type in the hierarchy, there are multiple tables, ending in the suffixes _Keys, _Sightings, _StableSamples and _UnstableSamples.
    Not every object type has a table for every suffix, but the _Keys suffix is especially important: a _Keys table does indeed exist for every object type. In fact, if we limit the query to return only those tables ending in _Keys, we reveal the full object hierarchy:

        SELECT sch.name + '.' + obj.name AS [name]
        FROM sys.objects obj
        JOIN sys.schemas sch ON sch.schema_id = obj.schema_id
        WHERE obj.type_desc = 'USER_TABLE' AND sch.name = 'data'
          AND obj.name LIKE '%_Keys'
        ORDER BY sch.name, obj.name;

        data.Cluster_Keys
        data.Cluster_Machine_Keys
        data.Cluster_Machine_LogicalDisk_Keys
        data.Cluster_Machine_Network_Keys
        data.Cluster_Machine_Process_Keys
        data.Cluster_Machine_Services_Keys
        data.Cluster_ResourceGroup_Keys
        data.Cluster_ResourceGroup_Resource_Keys
        data.Cluster_SqlServer_Agent_Job_History_Keys
        data.Cluster_SqlServer_Agent_Job_Keys
        data.Cluster_SqlServer_Database_BackupType_Backup_Keys
        data.Cluster_SqlServer_Database_BackupType_Keys
        data.Cluster_SqlServer_Database_CustomMetric_Keys
        data.Cluster_SqlServer_Database_File_Keys
        data.Cluster_SqlServer_Database_Keys
        data.Cluster_SqlServer_Database_Table_Index_Keys
        data.Cluster_SqlServer_Database_Table_Keys
        data.Cluster_SqlServer_Error_Keys
        data.Cluster_SqlServer_Keys
        data.Cluster_SqlServer_Services_Keys
        data.Cluster_SqlServer_SqlProcess_Keys
        data.Cluster_SqlServer_TopQueries_Keys
        data.Cluster_SqlServer_Trace_Keys
        data.Group_Keys

    The full object type hierarchy looks like this:

        Cluster
          Machine
            LogicalDisk
            Network
            Process
            Services
          ResourceGroup
            Resource
          SqlServer
            Agent
              Job
                History
            Database
              BackupType
                Backup
              CustomMetric
              File
              Table
                Index
            Error
            Services
            SqlProcess
            TopQueries
            Trace
        Group

    Okay, but what about the individual objects themselves, represented at each level in this hierarchy? Well, that's what the _Keys tables are for. This is probably best illustrated by way of a simple example: how can I query my own data repository to find the databases on my own PC for which monitoring data has been collected? Like this:

        SELECT clstr._Name AS cluster_name,
               srvr._Name AS instance_name,
               db._Name AS database_name
        FROM data.Cluster_SqlServer_Database_Keys db
        JOIN data.Cluster_SqlServer_Keys srvr
          ON db.ParentId = srvr.Id   -- Note here how the parent of a Database is a Server
        JOIN data.Cluster_Keys clstr
          ON srvr.ParentId = clstr.Id -- Note here how the parent of a Server is a Cluster
        WHERE clstr._Name = 'dev-chrisl2' -- This is the hostname of my own PC
        ORDER BY clstr._Name, srvr._Name, db._Name;

        cluster_name   instance_name   database_name
        dev-chrisl2                    SqlMonitorData
        dev-chrisl2                    master
        dev-chrisl2                    model
        dev-chrisl2                    msdb
        dev-chrisl2                    mssqlsystemresource
        dev-chrisl2                    tempdb
        dev-chrisl2    sql2005         SqlMonitorData
        dev-chrisl2    sql2005         TestDatabase
        dev-chrisl2    sql2005         master
        dev-chrisl2    sql2005         model
        dev-chrisl2    sql2005         msdb
        dev-chrisl2    sql2005         mssqlsystemresource
        dev-chrisl2    sql2005         tempdb
        dev-chrisl2    sql2008         SqlMonitorData
        dev-chrisl2    sql2008         master
        dev-chrisl2    sql2008         model
        dev-chrisl2    sql2008         msdb
        dev-chrisl2    sql2008         mssqlsystemresource
        dev-chrisl2    sql2008         tempdb

    These results show that I have three SQL Server instances on my machine (a default instance, one named sql2005 and one named sql2008), and each instance has the usual set of system databases, along with a database named SqlMonitorData. Basically, this is where I test SQL Monitor on different versions of SQL Server when I'm developing. There are a few important things we can learn from this query:

    - Each _Keys table has a column named Id. This is the primary key.
    - Each _Keys table has a column named ParentId. A foreign key relationship is defined between each _Keys table and its parent _Keys table in the hierarchy. There are two exceptions to this, Cluster_Keys and Group_Keys, because clusters and groups live at the root level of the object hierarchy.
    - Each _Keys table has a column named _Name. This is used to uniquely identify objects in the table within the scope of the same shared parent object.

    Actually, that last item isn't always true. In some cases, the _Name column is actually called something else. For example, the data.Cluster_Machine_Services_Keys table has a column named _ServiceName instead of _Name (sorry for the inconsistency). In other cases, a name isn't sufficient to uniquely identify an object. For example, right now my PC has multiple processes running, all sharing the same name, Chrome (one for each tab open in my web browser). In such cases, multiple columns are used to uniquely identify an object within the scope of the same shared parent object.

    Well, that's it for now. I've given you enough information for you to explore the _Keys tables to see how objects are stored in your own data repositories. In a future post, I'll try to explain how monitoring data is stored for each object, using the _StableSamples and _UnstableSamples tables. If you have any questions about this post, or suggestions for future posts, just submit them in the comments section below.


  • Connect to Google Talk without giving access to my full Google Account

    - by bneijt
    I would like to use Empathy as a Google Talk client. When I choose to go online, I'm asked to give [email protected] (Ubuntu) access to manage my online photos, Google Drive, emails and videos. I don't understand why Ubuntu has to have all these permissions on my Google account. Does this mean Ubuntu will start syncing my Google account to Ubuntu One? Can I connect to Google Talk without granting access to my videos, emails and photos to Ubuntu? What do you guys want with all my online things?


  • Decompilers - Myth or Fact?

    - by Simon
    Lately I have been thinking about application security, binaries and decompilers. (FYI: a decompiler is just an anti-compiler; its purpose is to get the source back from the binary.) Is there such a thing as a "perfect decompiler"? Or are binaries safe from reverse engineering? (For clarity's sake, by "perfect" I mean recovering the original source files, with all the variable names/macros/functions/classes and, if possible, comments in the respective headers and source files used to produce the binary.) What are some of the best practices used to prevent reverse engineering of software? Is it a major concern? Also, are obfuscation and file permissions the only ways to prevent unauthorized hacks on scripts? (Call me a script-junkie if you like.)


  • Version control and project management for freelancing jobs

    - by Groo
    Are there version control and project management tools which "work well" with freelancing jobs, if I want to keep my customer involved in the project at all times? What concerns me is that repository hosting providers base their fees on the "number of users", a number which will constantly increase as I finish one project after another. For each project, for example, I would have to grant my contractor permissions to pull the source code and collaborate. So how does that work in practice? Do I "remove" the contractor from the project once it's done? That basically states that I no longer offer support and bug fixes. Or do freelancers end up paying more and more money for these services? Do you use such online services, or do you host them yourself? Or do you simply send your code to your customer by e-mail in weekly iterations?


  • Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

    - by Anand Akela
    Contributed by Sunil Kunisetty and Daniel Chan

    Introduction and Architecture

    As more and more enterprises deploy some of their non-critical workload on Amazon Web Services (AWS), it's becoming critical to monitor those public AWS resources alongside their on-premise resources. The recently announced Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS) allows you to achieve that goal. The on-premise Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources. By deploying the plug-in within your Cloud Control environment, you gain the following management features:

    - Monitor EBS, EC2 and RDS instances on Amazon Web Services
    - Gather performance metrics and configuration details for AWS instances
    - Raise alerts and violations based on thresholds set on monitoring
    - Generate reports based on the gathered data

    Users of this plug-in can leverage rich Enterprise Manager features such as system promotion, incident generation based on thresholds, integration with 3rd-party ticketing applications, etc. AWS monitoring via this plug-in is enabled via the Amazon CloudWatch API, and users of this plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API. This plug-in can only be deployed on an EM12c R2 platform, and the agent version should be at minimum 12c R2.

    [Architecture diagram: EM12c with the AWS plug-in monitoring Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS) via the CloudWatch API]

    Here are a few key features:

    - Rich and exhaustive list of metrics; metrics can be gathered from an Agent running outside AWS
    - Critical configuration information
    - Custom home pages with charts and AWS configuration information
    - Incident generation based on thresholds set on monitoring data

    Discovery and Monitoring

    AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI) by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and properties for each AWS resource:

        AWS Resource Type   Target Type          Resource-specific properties
        EBS Resource        Amazon EBS Service   CloudWatch base URI, EC2 base URI, Period, Volume Id, Proxy Server and Port
        EC2 Resource        Amazon EC2 Service   CloudWatch base URI, EC2 base URI, Period, Instance Id, Proxy Server and Port
        RDS Resource        Amazon RDS Service   CloudWatch base URI, RDS base URI, Period, Instance Id, Proxy Server and Port

    Proxy server and port are optional and are only needed if the agent is within the firewall. Here is an emcli example to add an EC2 target. Please read the Installation and Readme guide for more details and step-by-step instructions to deploy the plug-in and add the AWS instances.
        ./emcli add_target \
          -name="<target name>" \
          -type="AmazonEC2Service" \
          -host="<host>" \
          -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point period>" \
          -subseparator=properties="="

        ./emcli set_monitoring_credential \
          -set_name="AWSKeyCredentialSet" \
          -target_name="<target name>" \
          -target_type="AmazonEC2Service" \
          -cred_type="AWSKeyCredential" \
          -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>"

    The emcli utility is found under the ORACLE_HOME of the EM12c install. Once the instance is discovered, the target will show up under the 'All Targets' list under 'Amazon EC2 Service'.

    Once the instances are added, one can navigate to the custom home pages for these resource types. The custom home pages not only include critical metrics, but also vital configuration parameters and incidents raised for these instances. By mapping the configuration parameters as instance properties, we can slice-and-dice and group various AWS instances by leveraging the EM12c config search feature. The following configuration properties and metrics are collected for these resource types:

        EBS Resource
          Configuration properties: Volume Id, Volume Type, Device Name, Size, Availability Zone
          Metrics: Response: Status; Utilization: QueueLength, IdleTime;
                   Volume Statistics: ReadBandwidth, WriteBandwidth, ReadThroughput, WriteThroughput;
                   Operation Statistics: ReadSize, WriteSize, ReadLatency, WriteLatency

        EC2 Resource
          Configuration properties: Instance ID, Owner Id, Root Device Type, Instance Type, Availability Zone
          Metrics: Response: Status; CPU Utilization: CPU Utilization;
                   Disk I/O: DiskReadBytes, DiskWriteBytes, DiskReadOps, DiskWriteOps, DiskReadRate, DiskWriteRate, DiskIOThroughput, DiskReadOpsRate, DiskWriteOpsRate, DiskOperationThroughput;
                   Network I/O: NetworkIn, NetworkOut, NetworkInRate, NetworkOutRate, NetworkThroughput

        RDS Resource
          Configuration properties: Instance ID, Database Engine Name, Database Engine Version, Database Instance Class, Allocated Storage Size, Availability Zone
          Metrics: Response: Status;
                   Disk I/O: ReadIOPS, WriteIOPS, ReadLatency, WriteLatency, ReadThroughput, WriteThroughput;
                   DB Utilization: BinLogDiskUsage, CPUUtilization, DatabaseConnections, FreeableMemory, ReplicaLag, SwapUsage

    Custom Home Pages

    As mentioned above, we have custom home pages for these target types that include basic configuration information, last-24-hours availability, top metrics and the incidents generated.

    [Screenshots: EBS, EC2 and RDS instance home pages]

    Further Reading:
    1) AWS Plugin download
    2) Installation and Read Me
    3) Screenwatch on SlideShare
    4) Extensibility Programmer's Guide
    5) Amazon Web Services
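    By analogy with the EC2 example, an RDS target would presumably be added like this. A sketch only: the AmazonRDSService type string and RDS_BaseURI property name are assumptions based on the mapping table above, so check the Installation and Readme guide for the authoritative names:

        ./emcli add_target \
          -name="<target name>" \
          -type="AmazonRDSService" \
          -host="<host>" \
          -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;RDS_BaseURI=http://rds.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<RDS instance Id>;Period=<data point period>" \
          -subseparator=properties="="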


  • dovecot can't compact mail folder /var/mail/username

    - by G. He
    Ubuntu 11.10, 32-bit. I set up a Dovecot IMAP server and use Thunderbird on a different Ubuntu machine (64-bit) to access it. Everything else is fine, except that I cannot compact the deleted email in the inbox, which is stored at /var/mail/username. Checking mail.log, I see this error message:

        Apr 3 00:10:11 autumn dovecot: imap(username): Error: file_dotlock_create(/var/mail/username) failed: Permission denied (euid=1000(username) egid=1000(username) missing +w perm: /var/mail, euid is not dir owner) (set mail_privileged_group=mail)

    What is wrong with the permissions? Here are the permissions for the relevant files:

        $ ls -ld /var/mail
        drwxrwsr-x 2 mail mail 4096 2012-04-02 23:36 /var/mail
        $ ls -l /var/mail/username
        -rw------- 1 username mail 417 2012-04-02 23:36 /var/mail/username

    Anyone know what's going on here?
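    The hint is in the error message itself: Dovecot suggests setting mail_privileged_group=mail, which lets it use the mail group to create the dotlock file in /var/mail. A sketch, assuming the stock Ubuntu config layout (the exact file may differ on other setups):

        # /etc/dovecot/conf.d/10-mail.conf
        mail_privileged_group = mail

        # then restart dovecot:
        sudo service dovecot restart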


  • Thank You for a Great Welcome for Oracle GoldenGate 11g Release 2

    - by Irem Radzik
    Yesterday morning we had two launch webcasts for Oracle GoldenGate 11g Release 2. I had the pleasure of presenting, as well as moderating the Q&A panels, in both of these webcasts. Both events had hundreds of live attendees, who sent us over 150 questions. Even though we left 30 minutes for Q&A, it was not nearly enough time to address all the insightful questions our audience sent. Our product management team and I really appreciate the interaction we had yesterday, and we are starting to respond to outstanding questions today. Oracle GoldenGate's new release also received a great welcome from the media. You can find links to various articles on the new release below:

    - ITBusinessEdge: Oracle Embraces Cross-Platform Data Integration
    - Information Week: Oracle Real-Time Advance Taps Compressed Data
    - Integration Developer News: Oracle GoldenGate Adds Deeper Oracle Integration, Extends Real-Time Performance
    - CIO: Oracle GoldenGate Buddies Up with Sibling Software
    - DBTA: Real-Time Data Integration: Oracle GoldenGate 11g Release 2 Now Available
    - CBR: Oracle unveils GoldenGate 11g Release 2 real-time data integration application

    In this blog, I want to address some of the frequently asked questions that came up during the webcasts. You can find the top questions and their answers, along with related resources, below. We will continue to address frequently asked questions via future blogs.

    Q: Will the new Integrated Capture for Oracle Database replace the Classic Capture? If not, which one do I use when?

    A: No, Classic Capture will be around for a long time. Core platform-specific features, bug fixes, and patches will be available for both Capture processes. Oracle Database-specific features will only be available in the Integrated Capture. The Integrated Capture for Oracle Database is an option for users that need to capture data from compressed tables, or need support for XML data types or XA on RAC. Users who don't leverage these features should continue to use our Classic Capture. For more information on Oracle GoldenGate 11g Release 2, I recommend the white paper Oracle GoldenGate 11gR2 New Features, as well as other technical white papers we have on OTN. For those of you coming to OpenWorld, please attend the related session Extracting Data in Oracle GoldenGate Integrated Capture Mode (Monday, Oct 1st, 1:45pm, Moscone South 102) to learn more about this new feature.

    Q: What is new in Conflict Detection and Resolution? And how does it work?

    A: There are now pre-built functions to identify the conditions under which an error occurs, and to specify how to handle the record when the condition occurs.
    Error conditions handled include: inserts into a target table where the row already exists; updates or deletes to target table rows that exist, but where the original source data (before columns) does not match the existing data in the target row; and updates or deletes where the row does not exist in the target database table. For each of these conditions, a method to handle the error is specified. Please check out our recent blog on this topic and the Oracle GoldenGate 11gR2 New Features white paper. Also, for those attending OpenWorld, please attend the session Best Practices for Conflict Detection and Resolution in Oracle GoldenGate for Active/Active (Wednesday, Oct 3rd, 3:30pm, Moscone 3000).

    Q: Do Oracle GoldenGate Veridata and the Management Pack require additional licenses, or are they incorporated with the GoldenGate license?

    A: Oracle GoldenGate Veridata and Oracle Management Pack for Oracle GoldenGate are additional products and require separate licenses. Please check out Oracle's price list here.

    Q: Does the GoldenGate Oracle Enterprise Manager Plug-in require an additional license?

    A: The Oracle Enterprise Manager Plug-in is included in the Oracle Management Pack for Oracle GoldenGate license, which is separate from the Oracle GoldenGate license. There is no separate license for the Enterprise Manager Plug-in by itself. Oracle GoldenGate Monitor, Oracle GoldenGate Director, and the Enterprise Manager Plug-in are all included in the Management Pack for Oracle GoldenGate license. Please check out the Management Pack for Oracle GoldenGate data sheet for more info on this product bundle.

    Q: Is Oracle GoldenGate replacing the Oracle Streams product?

    A: Oracle GoldenGate is the strategic data replication product. Therefore, Oracle Streams will continue to be supported, but will not be actively enhanced. Rather, the best elements of Oracle Streams will be added to Oracle GoldenGate. Conflict management is one of them, and with the latest release Oracle GoldenGate has a more advanced conflict management offering. Current customers depending on Oracle Streams will continue to be fully supported.

    Q: How is Oracle GoldenGate different from Oracle Data Integrator?

    A: Oracle Data Integrator is designed for fast bulk data movement and transformation between heterogeneous systems, while GoldenGate is designed for real-time movement of transactions between heterogeneous systems. These two products are completely complementary: GoldenGate provides low-impact real-time change data capture and delivery to a staging area on the target, and Oracle Data Integrator transforms this data and loads the DW tables. In fact, Oracle Data Integrator integrates with GoldenGate to use GoldenGate's Capture process as one option for its CDC mechanism. We have several customers that deployed GoldenGate and ODI together to feed real-time data to their data warehousing solutions. Please also check out the Oracle Data Integrator Changed Data Capture with Oracle GoldenGate data sheet (PDF).

    Thank you again very much for welcoming Oracle GoldenGate 11g Release 2, and stay in touch with us for more exciting news, updates, and events.


  • Sesame update du jour: SL 4, OOB, Azure, and proxy support

    I've just published a new version of Sesame Data Browser. Here's what's new this time:

    - Upgraded to Silverlight 4
    - Can run out-of-browser (OOB), with elevated permissions. This gives you an icon on your desktop and enables new scenarios. Note: the application is unsigned for the moment.
    - Support for Windows Azure authentication
    - Support for SQL Azure authentication

    If you are behind a proxy that requires authentication, just give Sesame a new try after clicking on "If you are behind a proxy that..."


  • Be able to create, edit and move files to and from /var/www

    - by Ryan Murphy
    I have installed LAMP on Ubuntu 12.04. I try to access /var/www, but it doesn't allow me to do anything: it says I don't have permissions. I have tried:

    1. gksudo nautilus, which works, but is a very inconvenient way of doing things.
    2. sudo adduser ryan www-data
       sudo chown -R www-data:www-data /var/www
       sudo chmod g+rw /var/www

    The above didn't work. I have googled and searched this site for a solution, but none of the existing possible solutions have worked.
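    For comparison, a sketch of the commonly suggested sequence, assuming the www-data group approach above; the usual catch is that new group membership only takes effect after logging out and back in, which is a frequent reason step 2 appears not to work:

        sudo adduser ryan www-data
        sudo chown -R www-data:www-data /var/www
        sudo chmod -R g+rw /var/www
        sudo chmod g+s /var/www        # new files inherit the www-data group
        # log out and back in (or run 'newgrp www-data') so the membership applies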


  • Force SSL using 301 Redirect on IIS7 gets 401.1 Error

    - by user2879305
    I've got a site that is using an Execute URL in the 403.4 error page slot that calls a page named forcessl.aspx. Here are the contents of the file:

        strWork = Replace(strQUERY_STRING, "http", "https")
        strWork = Replace(strWork, "403;", "")
        strWork = Replace(strWork, "80", "")
        strSecureURL = strWork
        Response.Write(strSecureURL)
        Response.Redirect(strSecureURL)
        Catch ex As Exception
        End Try
        End If
        %>

    This particular site gets a 401.1 error if https:// is not added to the URL. I have several other sites using the same method that work fine, and this one mirrors those in all ways that I can tell (folder permissions, etc.). This new site is just a subdomain of the same domain that the other sites are using. The main domain has a wildcard SSL cert. What else should I check?
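    For reference, the 403.4-to-ExecuteURL wiring that such a page relies on is typically configured along these lines in IIS7; a sketch, assuming forcessl.aspx sits at the site root:

        <!-- web.config, in the site that has "Require SSL" enabled -->
        <configuration>
          <system.webServer>
            <httpErrors errorMode="Custom">
              <remove statusCode="403" subStatusCode="4" />
              <error statusCode="403" subStatusCode="4"
                     path="/forcessl.aspx" responseMode="ExecuteURL" />
            </httpErrors>
          </system.webServer>
        </configuration>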


  • How to capitalize maximally on location-independence … my personal #1-incentive for working as a developer

    - by SomeGuy
    To me, the ultimate beauty of working as a developer is the fact that, given a nice CV, you are going to find a new job everywhere, at any time. So I would like to ask if somebody here has experience with working while travelling, for example, or job-hopping from metropolis to metropolis every, say, six months. For example, I have been investigating how to get to Brazil, but it seems that working as an employee in Brazil would be no option, because it takes a lot of time/money/effort to get the proper visas and permissions. So the only practical solution would be to freelance and just travel, while getting your job done wherever you are. I bet my ass that there are loads of IT guys out there and on here who know exactly what I am talking about. I'm looking forward to interesting ideas and stories.


  • Why does Akonadi on KDE 4.6.0 refuse to start?

    - by Patches
    Akonadi refuses to start on my fresh installation of KDE 4.6.0 from the kubuntu-backports PPA on Ubuntu 10.10 Maverick Meerkat, preventing me from using KMail. Here is the full error output:

        patches@pleistocene:~/.local/share$ akonadictl start
        Starting Akonadi Server... done.
        patches@pleistocene:~/.local/share$ Connecting to deprecated signal QDBusConnectionInterface::serviceOwnerChanged(QString,QString,QString)
        search paths: ("/home/patches/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/usr/games", "/usr/sbin", "/usr/local/sbin", "/usr/local/libexec", "/usr/libexec", "/opt/mysql/libexec", "/opt/local/lib/mysql5/bin", "/opt/mysql/sbin")
        Found mysql_install_db: "/usr/bin/mysql_install_db"
        Found mysqlcheck: "/usr/bin/mysqlcheck"
        Database process exited unexpectedly during initial connection!
        executable: "/usr/sbin/mysqld-akonadi"
        arguments: ("--defaults-file=/home/patches/.local/share/akonadi//mysql.conf", "--datadir=/home/patches/.local/share/akonadi/db_data/", "--socket=/home/patches/.local/share/akonadi/socket-pleistocene/mysql.socket")
        stdout: ""
        stderr: "Could not open required defaults file: /home/patches/.local/share/akonadi//mysql.conf
        Fatal error in defaults handling. Program aborted
        110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test
        110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test
        110209 16:41:12 [Note] Plugin 'FEDERATED' is disabled.
        /usr/sbin/mysqld-akonadi: Can't find file: './mysql/plugin.frm' (errno: 13)
        110209 16:41:12 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        110209 16:41:12 InnoDB: Operating system error number 13 in a file operation.
        InnoDB: The error means mysqld does not have the access rights to
        InnoDB: the directory.
        InnoDB: File name ./ibdata1
        InnoDB: File operation call: 'create'.
        InnoDB: Cannot continue operation.
" exit code: 1 process error: "Unknown error" "[ 0: akonadiserver(_Z11akBacktracev+0x35) [0x8086055] 1: akonadiserver() [0x8086516] 2: [0xb772e400] 3: [0xb772e416] 4: /lib/libc.so.6(gsignal+0x51) [0xb6e9f941] 5: /lib/libc.so.6(abort+0x182) [0xb6ea2e42] 6: /usr/lib/libQtCore.so.4(_Z17qt_message_output9QtMsgTypePKc+0x8c) [0xb74d62dc] 7: akonadiserver(_ZN15FileDebugStream9writeDataEPKcx+0xc4) [0x8087574] 8: /usr/lib/libQtCore.so.4(_ZN9QIODevice5writeEPKcx+0x8e) [0xb757168e] 9: /usr/lib/libQtCore.so.4(+0x103425) [0xb7581425] 10: /usr/lib/libQtCore.so.4(_ZN11QTextStreamD1Ev+0x3d) [0xb758295d] 11: akonadiserver(_ZN6QDebugD1Ev+0x43) [0x8081b73] 12: akonadiserver(_ZN13DbConfigMysql19startInternalServerEv+0x1c27) [0x810c177] 13: akonadiserver(_ZN7Akonadi13AkonadiServer20startDatabaseProcessEv+0xe3) [0x8087a23] 14: akonadiserver(_ZN7Akonadi13AkonadiServerC1EP7QObject+0xca) [0x8088b6a] 15: akonadiserver(_ZN7Akonadi13AkonadiServer8instanceEv+0x48) [0x808a1d8] 16: akonadiserver(main+0x364) [0x8080fb4] 17: /lib/libc.so.6(__libc_start_main+0xe7) [0xb6e8bce7] 18: akonadiserver() [0x8080b81] ] " ProcessControl: Application 'akonadiserver' returned with exit code 255 (Unknown error) search paths: ("/home/patches/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/usr/games", "/usr/sbin", "/usr/local/sbin", "/usr/local/libexec", "/usr/libexec", "/opt/mysql/libexec", "/opt/local/lib/mysql5/bin", "/opt/mysql/sbin") Found mysql_install_db: "/usr/bin/mysql_install_db" Found mysqlcheck: "/usr/bin/mysqlcheck" Database process exited unexpectedly during initial connection! executable: "/usr/sbin/mysqld-akonadi" arguments: ("--defaults-file=/home/patches/.local/share/akonadi//mysql.conf", "--datadir=/home/patches/.local/share/akonadi/db_data/", "--socket=/home/patches/.local/share/akonadi/socket-pleistocene/mysql.socket") stdout: "" stderr: "Could not open required defaults file: /home/patches/.local/share/akonadi//mysql.conf Fatal error in defaults handling. Program aborted 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Note] Plugin 'FEDERATED' is disabled. /usr/sbin/mysqld-akonadi: Can't find file: './mysql/plugin.frm' (errno: 13) 110209 16:41:12 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 110209 16:41:12 InnoDB: Operating system error number 13 in a file operation. InnoDB: The error means mysqld does not have the access rights to InnoDB: the directory. InnoDB: File name ./ibdata1 InnoDB: File operation call: 'create'. InnoDB: Cannot continue operation. 
" exit code: 1 process error: "Unknown error" "[ 0: akonadiserver(_Z11akBacktracev+0x35) [0x8086055] 1: akonadiserver() [0x8086516] 2: [0xb77ae400] 3: [0xb77ae416] 4: /lib/libc.so.6(gsignal+0x51) [0xb6f1f941] 5: /lib/libc.so.6(abort+0x182) [0xb6f22e42] 6: /usr/lib/libQtCore.so.4(_Z17qt_message_output9QtMsgTypePKc+0x8c) [0xb75562dc] 7: akonadiserver(_ZN15FileDebugStream9writeDataEPKcx+0xc4) [0x8087574] 8: /usr/lib/libQtCore.so.4(_ZN9QIODevice5writeEPKcx+0x8e) [0xb75f168e] 9: /usr/lib/libQtCore.so.4(+0x103425) [0xb7601425] 10: /usr/lib/libQtCore.so.4(_ZN11QTextStreamD1Ev+0x3d) [0xb760295d] 11: akonadiserver(_ZN6QDebugD1Ev+0x43) [0x8081b73] 12: akonadiserver(_ZN13DbConfigMysql19startInternalServerEv+0x1c27) [0x810c177] 13: akonadiserver(_ZN7Akonadi13AkonadiServer20startDatabaseProcessEv+0xe3) [0x8087a23] 14: akonadiserver(_ZN7Akonadi13AkonadiServerC1EP7QObject+0xca) [0x8088b6a] 15: akonadiserver(_ZN7Akonadi13AkonadiServer8instanceEv+0x48) [0x808a1d8] 16: akonadiserver(main+0x364) [0x8080fb4] 17: /lib/libc.so.6(__libc_start_main+0xe7) [0xb6f0bce7] 18: akonadiserver() [0x8080b81] ] " ProcessControl: Application 'akonadiserver' returned with exit code 255 (Unknown error) search paths: ("/home/patches/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/usr/games", "/usr/sbin", "/usr/local/sbin", "/usr/local/libexec", "/usr/libexec", "/opt/mysql/libexec", "/opt/local/lib/mysql5/bin", "/opt/mysql/sbin") Found mysql_install_db: "/usr/bin/mysql_install_db" Found mysqlcheck: "/usr/bin/mysqlcheck" Database process exited unexpectedly during initial connection! executable: "/usr/sbin/mysqld-akonadi" arguments: ("--defaults-file=/home/patches/.local/share/akonadi//mysql.conf", "--datadir=/home/patches/.local/share/akonadi/db_data/", "--socket=/home/patches/.local/share/akonadi/socket-pleistocene/mysql.socket") stdout: "" stderr: "Could not open required defaults file: /home/patches/.local/share/akonadi//mysql.conf Fatal error in defaults handling. Program aborted 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Note] Plugin 'FEDERATED' is disabled. /usr/sbin/mysqld-akonadi: Can't find file: './mysql/plugin.frm' (errno: 13) 110209 16:41:12 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 110209 16:41:12 InnoDB: Operating system error number 13 in a file operation. InnoDB: The error means mysqld does not have the access rights to InnoDB: the directory. InnoDB: File name ./ibdata1 InnoDB: File operation call: 'create'. InnoDB: Cannot continue operation. 
" exit code: 1 process error: "Unknown error" "[ 0: akonadiserver(_Z11akBacktracev+0x35) [0x8086055] 1: akonadiserver() [0x8086516] 2: [0xb778b400] 3: [0xb778b416] 4: /lib/libc.so.6(gsignal+0x51) [0xb6efc941] 5: /lib/libc.so.6(abort+0x182) [0xb6effe42] 6: /usr/lib/libQtCore.so.4(_Z17qt_message_output9QtMsgTypePKc+0x8c) [0xb75332dc] 7: akonadiserver(_ZN15FileDebugStream9writeDataEPKcx+0xc4) [0x8087574] 8: /usr/lib/libQtCore.so.4(_ZN9QIODevice5writeEPKcx+0x8e) [0xb75ce68e] 9: /usr/lib/libQtCore.so.4(+0x103425) [0xb75de425] 10: /usr/lib/libQtCore.so.4(_ZN11QTextStreamD1Ev+0x3d) [0xb75df95d] 11: akonadiserver(_ZN6QDebugD1Ev+0x43) [0x8081b73] 12: akonadiserver(_ZN13DbConfigMysql19startInternalServerEv+0x1c27) [0x810c177] 13: akonadiserver(_ZN7Akonadi13AkonadiServer20startDatabaseProcessEv+0xe3) [0x8087a23] 14: akonadiserver(_ZN7Akonadi13AkonadiServerC1EP7QObject+0xca) [0x8088b6a] 15: akonadiserver(_ZN7Akonadi13AkonadiServer8instanceEv+0x48) [0x808a1d8] 16: akonadiserver(main+0x364) [0x8080fb4] 17: /lib/libc.so.6(__libc_start_main+0xe7) [0xb6ee8ce7] 18: akonadiserver() [0x8080b81] ] " ProcessControl: Application 'akonadiserver' returned with exit code 255 (Unknown error) search paths: ("/home/patches/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/usr/games", "/usr/sbin", "/usr/local/sbin", "/usr/local/libexec", "/usr/libexec", "/opt/mysql/libexec", "/opt/local/lib/mysql5/bin", "/opt/mysql/sbin") Found mysql_install_db: "/usr/bin/mysql_install_db" Found mysqlcheck: "/usr/bin/mysqlcheck" Database process exited unexpectedly during initial connection! executable: "/usr/sbin/mysqld-akonadi" arguments: ("--defaults-file=/home/patches/.local/share/akonadi//mysql.conf", "--datadir=/home/patches/.local/share/akonadi/db_data/", "--socket=/home/patches/.local/share/akonadi/socket-pleistocene/mysql.socket") stdout: "" stderr: "Could not open required defaults file: /home/patches/.local/share/akonadi//mysql.conf Fatal error in defaults handling. Program aborted 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Note] Plugin 'FEDERATED' is disabled. /usr/sbin/mysqld-akonadi: Can't find file: './mysql/plugin.frm' (errno: 13) 110209 16:41:12 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 110209 16:41:12 InnoDB: Operating system error number 13 in a file operation. InnoDB: The error means mysqld does not have the access rights to InnoDB: the directory. InnoDB: File name ./ibdata1 InnoDB: File operation call: 'create'. InnoDB: Cannot continue operation. 
" exit code: 1 process error: "Unknown error" "[ 0: akonadiserver(_Z11akBacktracev+0x35) [0x8086055] 1: akonadiserver() [0x8086516] 2: [0xb784e400] 3: [0xb784e416] 4: /lib/libc.so.6(gsignal+0x51) [0xb6fbf941] 5: /lib/libc.so.6(abort+0x182) [0xb6fc2e42] 6: /usr/lib/libQtCore.so.4(_Z17qt_message_output9QtMsgTypePKc+0x8c) [0xb75f62dc] 7: akonadiserver(_ZN15FileDebugStream9writeDataEPKcx+0xc4) [0x8087574] 8: /usr/lib/libQtCore.so.4(_ZN9QIODevice5writeEPKcx+0x8e) [0xb769168e] 9: /usr/lib/libQtCore.so.4(+0x103425) [0xb76a1425] 10: /usr/lib/libQtCore.so.4(_ZN11QTextStreamD1Ev+0x3d) [0xb76a295d] 11: akonadiserver(_ZN6QDebugD1Ev+0x43) [0x8081b73] 12: akonadiserver(_ZN13DbConfigMysql19startInternalServerEv+0x1c27) [0x810c177] 13: akonadiserver(_ZN7Akonadi13AkonadiServer20startDatabaseProcessEv+0xe3) [0x8087a23] 14: akonadiserver(_ZN7Akonadi13AkonadiServerC1EP7QObject+0xca) [0x8088b6a] 15: akonadiserver(_ZN7Akonadi13AkonadiServer8instanceEv+0x48) [0x808a1d8] 16: akonadiserver(main+0x364) [0x8080fb4] 17: /lib/libc.so.6(__libc_start_main+0xe7) [0xb6fabce7] 18: akonadiserver() [0x8080b81] ] " ProcessControl: Application 'akonadiserver' returned with exit code 255 (Unknown error) "akonadiserver" crashed too often and will not be restarted! I tried moving the ~/.local/share/akonadi folder and running it fresh, and I also tried starting Akonadi from a brand new user, all to no avail. Requested by @djeikyb: patches@pleistocene:~$ ls -ld ~/.local drwxrwx--- 3 patches patches 4096 2011-02-07 03:15 /home/patches/.local patches@pleistocene:~$ mysql_upgrade Looking for 'mysql' as: mysql Looking for 'mysqlcheck' as: mysqlcheck Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) when trying to connect FATAL ERROR: Upgrade failed patches@pleistocene:~$ mysql_upgrade -S ~/.local/share/akonadi/socket-pleistocene/ Looking for 'mysql' as: mysql Looking for 'mysqlcheck' as: mysqlcheck Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--socket=/home/patches/.local/share/akonadi/socket-pleistocene/' mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/home/patches/.local/share/akonadi/socket-pleistocene/' (111) when trying to connect FATAL ERROR: Upgrade failed


  • Oracle OpenWorld 2012 pre-event preview: mark your calendars

    - by Eric Bezille
    Now less than a month away from Oracle's major event, held as every year in San Francisco at the end of September and beginning of October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I encourage you to look at the topics of the keynotes that will be given by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development), to get a foretaste.

    Oracle strategy and roadmaps

    Of course, beyond the keynote sessions, which will give you a precise view of the strategy, and for those who will be on site, I urge you not to miss the deep-dive sessions that will take place during the week. Here is a small selection:

    - "Accelerate your Business with the Oracle Hardware Advantage", with John Fowler, Monday, October 1st, 3:15pm-4:15pm
    - "Why Oracle Software Runs Best on Oracle Hardware", with Bradley Carlile, head of benchmarks, Monday, October 1st, 12:15pm-1:15pm
    - "Engineered Systems - from Vision to Game-changing Results", with Robert Shimp, Monday, October 1st, 1:45pm-2:45pm
    - "Database and Application Consolidation on SPARC Supercluster", with Hugo Rivero, a manager in the hardware and software integration teams, Monday, October 1st, 4:45pm-5:45pm
    - "Oracle's SPARC Server Strategy Update", with Masood Heydari, head of SPARC server development, Tuesday, October 2nd, 10:15am-11:15am
    - "Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap", with Markus Flier, head of Solaris development, Wednesday, October 3rd, 10:15am-11:15am
    - "Oracle Virtualization Strategy and Roadmap", with Wim Coekaerts, head of Oracle VM and Oracle Linux development, Monday, October 1st, 12:15pm-1:15pm
    - "Big Data: The Big Story", with Jean-Pierre Dijcks, head of Big Data product development, Monday, October 1st, 3:15pm-4:15pm
    - "Scaling with the Cloud: Strategies for Storage in Cloud Deployments", with Christine Rogers, Principal Product Manager, and Chris Wood, Senior Product Specialist, Storage, Monday, October 1st, 10:45am-11:45am

    Customer experiences and testimonials

    While Oracle OpenWorld is an opportunity to talk directly with Oracle's development teams, it is also a chance to hear from customers and experts who have implemented our technologies, and to benefit from their feedback, for example:

    - "Oracle Optimized Solution for Siebel CRM at ACCOR", with testimonials from Eric Wyttynck, IT Director Multichannel & CRM, and Pascal Massenet, VP Loyalty & CRM systems, on the benefits not only for the business but also for the project and IT, Wednesday, October 3rd, 1:15pm-2:15pm
    - "Tips from AT&T: Oracle E-Business Suite, Oracle Database, and SPARC Enterprise", with feedback from Oracle experts, Tuesday, October 2nd, 11:45am-12:45pm
    - "Creating a Maximum Availability Architecture with SPARC SuperCluster", with a testimonial from Carte Wright, Database Engineer at CKI, Wednesday, October 3rd, 11:45am-12:45pm
    - "Multitenancy: Everybody Talks It, Oracle Walks It with Pillar Axiom Storage", with a testimonial from Stephen Schleiger, Manager Systems Engineering at Navis, Monday, October 1st, 1:45pm-2:45pm
    - "Oracle Exadata for Database Consolidation: Best Practices", with feedback from the Oracle experts who took part in an implementation for a major banking customer, Monday, October 1st, 4:45pm-5:45pm
    - "Oracle Exadata Customer Panel: Packaged Applications with Oracle Exadata", moderated by Tim Shetler, VP Product Management, Tuesday, October 2nd, 1:15pm-2:15pm
    - "Big Data: Improving Nearline Data Throughput with the StorageTek SL8500 Modular Library System", with a testimonial from CSC's CTO, Alan Powers, Thursday, October 4th, 12:45pm-1:45pm
    - "Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC", with testimonials from Syed Qadri, Lead DBA, and Michael Arnold, System Architect, at US Cellular, Tuesday, October 2nd, 10:15am-11:15am
    - "Transform Data Center TCO with Oracle Optimized Servers: A Customer Panel", with testimonials from AT&T and Liberty Global, among others, Tuesday, October 2nd, 11:45am-12:45pm
    - "Data Warehouse and Big Data Customers' View of the Future", with The Nielsen Company US, Turkcell, GE Retail Finance, and Allianz Managed Operations and Services SE, Monday, October 1st, 4:45pm-5:45pm
    - "Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization", the story of Oracle's own internal IT on the transformation and migration of our entire storage infrastructure, Tuesday, October 2nd, 1:15pm-2:15pm

    Meet the user groups and the Oracle development teams

    If you plan to arrive early enough, you can also talk with the user groups on Sunday, or with the Oracle development teams every evening, on topics such as:

    - "To Exalogic or Not to Exalogic: An Architectural Journey", with Todd Sheetz, Manager of DBA and Enterprise Architecture, Veolia Environmental Services, Sunday, September 30th, 2:30pm-3:30pm
    - "Oracle Exalytics and Oracle TimesTen for Exalytics Best Practices", with Mark Rittman, of Rittman Mead Consulting Ltd, Sunday, September 30th, 10:30am-11:30am
    - "Introduction of Oracle Exadata at Telenet: Bringing BI to Warp Speed", with Rudy Verlinden and Eric Bartholomeus, IT infrastructure managers at Telenet, Sunday, September 30th, 1:15pm-2:00pm
    - "The Perfect Marriage: Sun ZFS Storage Appliance with Oracle Exadata", with Melanie Polston, Director, Data Management, at Novation, and Charles Kim, Managing Director of Viscosity, Sunday, September 30th, 9:00am-10:00am
    - "Oracle's Big Data Solutions: NoSQL, Connectors, R, and Appliance Technologies", with Jean-Pierre Dijcks and the Oracle development teams, Monday, October 1st, 6:15pm-7:00pm

    Test and evaluate the solutions

    Finally, you can even try the technologies out at the Oracle DemoGrounds (1133 Moscone South for the Oracle systems, OS, and virtualization part) and in the hands-on labs, such as:

    - "Deploying an IaaS Environment with Oracle VM", Tuesday, October 2nd, 10:15am-11:15am
    - "Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-on Lab", Tuesday, October 2nd, 11:45am-12:45pm (it is strongly recommended to take the previous hands-on lab before doing this one)
    - "x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance", Wednesday, October 3rd, 5:00pm-6:00pm
    - "StorageTek Tape Analytics: Managing Tape Has Never Been So Simple", Wednesday, October 3rd, 1:15pm-2:15pm
    - "Oracle's Pillar Axiom 600 Storage System: Power and Ease", Monday, October 1st, 12:15pm-1:15pm
    - "Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c", Monday, October 1st, 1:45pm-2:45pm
    - "Managing Storage in the Cloud", Tuesday, October 2nd, 5:00pm-6:00pm
    - "Learn How to Write MapReduce on Oracle's Big Data Platform", Monday, October 1st, 12:15pm-1:15pm
    - "Oracle Big Data Analytics and R", Tuesday, October 2nd, 1:15pm-2:15pm
    - "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications", Monday, October 1st, 10:45am-11:45am
    - "Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11", Monday, October 1st, 4:45pm-5:45pm
    - "Virtualizing Your Oracle Solaris 11 Environment", Tuesday, October 2nd, 1:15pm-2:15pm
    - "Large-Scale Installation and Deployment of Oracle Solaris 11", Wednesday, October 3rd, 3:30pm-4:30pm

    In conclusion: a very rich week ahead, which will let you cover all of the topics at the heart of your concerns, from strategy to implementation. Such a week needs preparing, so tailor your agenda from the more than 2,000 sessions, of which I have given you only an extract, and all of which you can find online.


  • Deleting old tomcat version and setting a new one

    - by Diego
    I had Apache Tomcat installed via apt-get, but I decided to get a newer one, so I ran apt-get remove tomcat7 and apt-get purge tomcat7. The newer one I installed is the Tomcat server bundled with the NetBeans installer. However, I'm still seeing the old default page from the former Tomcat install: "It works! If you're seeing this page via a web browser, it means you've setup Tomcat successfully. Congratulations! This is the default Tomcat home page. It can be found on the local filesystem at: /var/lib/tomcat7/webapps/ROOT/index.html". I already set a different port in the new server.xml, but whenever I go to that site after executing startup.sh with sudo permissions, I get no page at all, as if the new server isn't running. How can I still be getting the page from the old Tomcat install? When I execute startup.sh, the log says everything is set up OK, so why isn't it working?

    Read the article

  • Gtk warning when opening Gedit in terminal

    - by dellphi
    I needed to clear my recent documents history, so I Googled and found this: http://www.watchingthenet.com/ubuntu-tip-clear-disable-recent-documents.html I followed the steps, and then when I opened gedit in a root terminal I got this:

    root@dellph1-desktop:/# gedit
    (gedit:8224): GLib-CRITICAL **: g_bookmark_file_load_from_data: assertion `length != 0' failed
    (gedit:8224): Gtk-WARNING **: Attempting to store changes into `/root/.recently-used.xbel', but failed: Failed to rename file '/root/.recently-used.xbel.FP7PPV' to '/root/.recently-used.xbel': g_rename() failed: Operation not permitted
    (gedit:8224): Gtk-WARNING **: Attempting to set the permissions of `/root/.recently-used.xbel', but failed: Operation not permitted
    root@dellph1-desktop:/#

    And the same happens in a user terminal:

    dellph1@dellph1-desktop:~$ gedit
    (gedit:9408): Gtk-CRITICAL **: gtk_accel_label_set_accel_closure: assertion `gtk_accel_group_from_accel_closure (accel_closure) != NULL' failed
    dellph1@dellph1-desktop:~$

    I really hope someone can help with this, thank you.

    Read the article

  • su: Authentication failure

    - by user166999
    I downloaded ESET NOD32 Antivirus from its website, and I got an eset_nod32av_64bit_en.linux file. I right-clicked on it, chose Properties - Permissions, and checked the "Allow executing file as program" option. Then I was able to run the program. The installer starts and asks for my root password. I enter it correctly, but then I get the following error message: "su: Authentication failure". What can I do to install NOD32? Thanks!

    Read the article

  • Looking for WAMP Benchmarking (my current WAMP is very slow, so are other solutions)

    - by therobyouknow
    I'm running the ZWAMP WAMP stack on my local development machine. However, I have found it to be very slow at serving pages from a Drupal site I have set up. By contrast, my live production site on shared hosting is reasonably quick. For me, the goal of a local WAMP stack was to develop offline and send completed work to the live production site. I liked ZWAMP because it didn't require adjustments to User Access Control or other permissions. I've looked at the Drupal Acquia development stack but found it too restrictive: only one site instance/doc root can be installed. I've looked at other DAMP stacks and heard reports of them being slow. My local development machine running the WAMP stack is a dual-core 2.6GHz hyperthreaded Intel i7 with 4GB RAM and a 7200rpm hard disk, running 64-bit Windows Professional. Surely this is fast enough. So I'm looking for: the causes of the slowness of the WAMP stack and how to improve its speed, and benchmark data for various WAMP stacks.

    Read the article

  • Digital Asset Management System

    - by Prashant
    I am looking for an open-source, web-based digital asset management system. My requirements are a web-based system where users can upload and download .zip, .jpg, .png, .pdf, .doc, .xls, and other media files. There should also be user management, so that we can create multiple users and grant them permissions accordingly. I have found http://www.resourcespace.org/, but it looks a bit big and complicated. It fits my needs, but I am looking and researching a bit more to find a good, easier-to-use system. If anyone knows of such a web-based system or tool, please share.

    Read the article

  • Java Cloud Service Integration using Web Service Data Control

    - by Jani Rautiainen
    Java Cloud Service (JCS) provides a platform to develop and deploy business applications in the cloud. In Fusion Applications Cloud deployments, customers do not have the option to deploy custom applications developed with JDeveloper, to ensure the integrity and supportability of the hosted application service. Instead, custom applications can be deployed to JCS and integrated with the Fusion Applications Cloud instance. This series of articles will go through the features of JCS, providing end-to-end examples of how to develop and deploy applications on JCS and how to integrate them with the Fusion Applications instance. In this article, a custom application integrating with Fusion Applications using a Web Service Data Control will be implemented.

Pre-requisites

Access to a Cloud instance. In order to deploy the application, access to a JCS instance is needed; a free trial JCS instance can be obtained from the Oracle Cloud site. To register you will need a credit card, even though the card will not be charged. To register, simply click "Try it" and choose the "Java" option. The confirmation email will contain the connection details. See this video for an example of the registration. Once the request is processed, you will be assigned two service instances: Java and Database. Applications deployed to JCS must use Oracle Database Cloud Service as their underlying database, so when a JCS instance is created, a database instance is associated with it using a JDBC data source. The cloud services can be monitored and managed through the web UI. For details, refer to Getting Started with Oracle Cloud.

JDeveloper. JDeveloper contains Cloud-specific features related to, for example, connections and deployment. To use these features, download JDeveloper from the JDeveloper download site by clicking the "Download JDeveloper 11.1.1.7.1 for ADF deployment on Oracle Cloud" link; this version of JDeveloper has the JCS integration features that will be used in this article. For versions that do not include the Cloud integration features, the Oracle Java Cloud Service SDK or the JCS Java Console can be used for deployment. For details on installing and configuring JDeveloper, refer to the installation guide. For details on the SDK, refer to Using the Command-Line Interface to Monitor Oracle Java Cloud Service and Using the Command-Line Interface to Manage Oracle Java Cloud Service.

Create Application

In this example the "JcsWsDemo" application created in the "Java Cloud Service Integration using Web Service Proxy" article is used as the base.

Create Web Service Data Control

In this example we will use a Web Service Data Control to integrate with the Credit Rule Service in Fusion Applications. The data control will be used to query data from Fusion Applications using a web service call and present the data in a table.
To generate the data control, choose the "Model" project, navigate to "New -> All Technologies -> Business Tier -> Data Controls -> Web Service Data Control" and enter the following:

    Name: CreditRuleServiceDC
    URL: https://ic-[POD].oracleoutsourcing.com/icCnSetupCreditRulesPublicService/CreditRuleService?WSDL
    Service: {http://xmlns.oracle.com/apps/incentiveCompensation/cn/creditSetup/creditRule/creditRuleService/}CreditRuleService

On step 2, select the "findRule" operation. Skip step 3, and on step 4 define the credentials to access the service. Do note that in this example these credentials are only used when testing locally; for JCS deployment the credentials need to be manually updated in the EAR file. Click "Finish" and the data control generation is done.

Creating the UI

In order to use the data control, we need to populate the complex objects FindCriteria and FindControl. For simplicity, in this example we create logic in a managed bean that populates the objects. Open "JcsWsDemoBean.java" and add the following logic:

    import java.util.HashMap;
    import java.util.Map;

    // Properties bound from the page definition
    private Map findCriteria;
    private Map findControl;

    public void setFindCriteria(Map findCriteria) {
        this.findCriteria = findCriteria;
    }

    // Populate the FindCriteria complex type: fetch the first 10 rows
    public Map getFindCriteria() {
        findCriteria = new HashMap();
        findCriteria.put("fetchSize", 10);
        findCriteria.put("fetchStart", 0);
        return findCriteria;
    }

    public void setFindControl(Map findControl) {
        this.findControl = findControl;
    }

    // An empty FindControl is sufficient for this example
    public Map getFindControl() {
        findControl = new HashMap();
        return findControl;
    }

Open "JcsWsDemo.jspx", navigate to "Data Controls -> CreditRuleServiceDC -> findRule(Object, Object) -> result" and drag and drop the "result" node into the "af:form" element in the page. In the "Edit Table Columns" dialog, remove all columns except "RuleId" and "Name". In the "Edit Action Binding" window displayed, enter the reference to the Java class created above by selecting "#{JcsWsDemoBean.findCriteria}", and define the value for "findControl" by selecting "#{JcsWsDemoBean.findControl}".

Deploy to JCS

For a Web Service Data Control, the authentication details need to be updated in the connection details before deploying. Open "connections.xml" by navigating to "Application Resources -> Descriptors -> ADF META-INF -> connections.xml", and change the user name and password entry from:

    <soap username="transportUserName" password="transportPassword"

to match the access details for the target environment. Follow the same steps as documented in the previous article, "Java Cloud Service ADF Web Application". Once deployed, the application can be accessed with the URL: https://java-[identity domain].java.[data center].oraclecloudapps.com/JcsWsDemo-ViewController-context-root/faces/JcsWsDemo.jspx

When accessed, the first 10 rules in the system are displayed.

Summary

In this article we learned how to integrate with Fusion Applications using a Web Service Data Control in JCS. Future articles will cover various other integration techniques.
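As a complement to the declarative data control, the same findRule operation can also be invoked programmatically, which is handy for inspecting the raw SOAP payload while debugging. Below is a minimal JAX-WS Dispatch sketch; it is not part of the original walkthrough, the port name is an assumption (check the WSDL for the actual port element), and the request body still needs to be populated per the service schema:

    import java.net.URL;
    import javax.xml.namespace.QName;
    import javax.xml.soap.MessageFactory;
    import javax.xml.soap.SOAPMessage;
    import javax.xml.ws.BindingProvider;
    import javax.xml.ws.Dispatch;
    import javax.xml.ws.Service;

    public class FindRuleDispatchSketch {
        private static final String NS =
            "http://xmlns.oracle.com/apps/incentiveCompensation/cn/creditSetup/creditRule/creditRuleService/";

        public static void main(String[] args) throws Exception {
            URL wsdl = new URL("https://ic-[POD].oracleoutsourcing.com/"
                + "icCnSetupCreditRulesPublicService/CreditRuleService?WSDL");
            Service service = Service.create(wsdl, new QName(NS, "CreditRuleService"));

            // Port name is an assumption; check the WSDL for the actual port element
            Dispatch<SOAPMessage> dispatch = service.createDispatch(
                new QName(NS, "CreditRuleServiceSoapHttpPort"),
                SOAPMessage.class, Service.Mode.MESSAGE);

            // Same credentials the data control uses against the target environment
            dispatch.getRequestContext().put(BindingProvider.USERNAME_PROPERTY, "user");
            dispatch.getRequestContext().put(BindingProvider.PASSWORD_PROPERTY, "password");

            // Build a findRule request carrying the same criteria the managed
            // bean populates (fetchSize=10, fetchStart=0), per the service schema
            SOAPMessage request = MessageFactory.newInstance().createMessage();
            // ... populate the findCriteria/findControl elements here ...

            SOAPMessage response = dispatch.invoke(request);
            response.writeTo(System.out);
        }
    }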

    Read the article

  • A New Threat To Web Applications: Connection String Parameter Pollution (CSPP)

    - by eric.maurice
    Hi, this is Shaomin Wang. I am a security analyst in Oracle's Security Alerts Group. My primary responsibility is to evaluate the security vulnerabilities reported externally by security researchers on Oracle Fusion Middleware and to ensure timely resolution through the Critical Patch Update. Today, I am going to talk about a serious type of attack: Connection String Parameter Pollution (CSPP). Earlier this year, at the Black Hat DC 2010 Conference, two Spanish security researchers, Jose Palazon and Chema Alonso, unveiled a new class of security vulnerabilities, which target insecure dynamic connections between web applications and databases. The attack, called Connection String Parameter Pollution (CSPP), specifically exploits the semicolon-delimited database connection strings that are constructed dynamically based on user inputs from web applications. CSPP, if carried out successfully, can be used to steal user identities and hijack web credentials. CSPP is a high-risk attack because of the relative ease with which it can be carried out (low access complexity) and the potential results it can have (high impact). In today's blog, we are going to first look at what connection strings are and then review the different ways connection string injections can be leveraged by malicious hackers. We will then discuss how CSPP differs from traditional connection string injection, and the measures organizations can take to prevent this kind of attack.

In web applications, a connection string is a set of values that specifies information to connect to backend data repositories, in most cases databases. The connection string is passed to a provider or driver to initiate a connection. Vendors or manufacturers write their own providers for different databases. Since there are many different providers and each provider has multiple ways to make a connection, there are many different ways to write a connection string. Here are some examples of connection strings from Oracle Data Provider for .NET / ODP.NET (Manufacturer: Oracle; Type: .NET Framework Class Library):

- Using TNS: Data Source = orcl; User ID = myUsername; Password = myPassword;
- Using integrated security: Data Source = orcl; Integrated Security = SSPI;
- Using the Easy Connect Naming Method: Data Source = username/password@//myserver:1521/my.server.com
- Specifying pooling parameters: Data Source = myOracleDB; User Id = myUsername; Password = myPassword; Min Pool Size = 10; Connection Lifetime = 120; Connection Timeout = 60; Incr Pool Size = 5; Decr Pool Size = 2;

There are many variations of connection strings, but the majority are key-value pairs delimited by semicolons. Attacks on connection strings are not new (see, for example, this SANS White Paper on Securing SQL Connection Strings). Connection strings are vulnerable to injection attacks when dynamic string concatenation is used to build them from user input. When the user input is not validated or filtered, and malicious text or characters are not properly escaped, an attacker can potentially access sensitive data or resources. For a number of years now, vendors, including Oracle, have created connection string builder classes to help developers generate valid connection strings and potentially prevent this kind of vulnerability. Unfortunately, not all application developers use these utilities, because they are not aware of the danger posed by this kind of attack.
So how are Connection String Parameter Pollution (CSPP) attacks different from traditional connection string injection attacks? First, let's look at what parameter pollution attacks are. Parameter pollution is a technique which typically involves appending repeated parameters to request strings to attack the receiving end. Much of the public attention around parameter pollution was initiated by a presentation on HTTP Parameter Pollution attacks by Stefano Di Paola and Luca Carettoni, delivered at the 2009 AppSec OWASP Conference in Poland. In HTTP Parameter Pollution attacks, an attacker submits additional parameters in HTTP GET/POST to a web application, and if these parameters have the same name as an existing parameter, the web application may react in different ways depending on how the web application and web server deal with multiple parameters with the same name.

When applied to connection strings, the rule for the majority of database providers is the "last one wins" algorithm: if a KEYWORD=VALUE pair occurs more than once in the connection string, the value associated with the LAST occurrence is used. This opens the door to some serious attacks. By way of example, in a web application a user enters a username and password, and a connection string is subsequently generated to connect to the backend database:

Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = XXX;

If, in the password field, the attacker enters "XXX; Integrated Security = true", the connection string becomes:

Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = XXX; Integrated Security = true;

Under the "last one wins" principle, the web application will then try to connect to the database using the operating system account under which the application is running, bypassing normal authentication.

CSPP poses serious risks for unprepared organizations. It can be particularly dangerous if an Enterprise Systems Management web front-end is compromised, because attackers can then gain access to control panels to configure databases, system accounts, etc. Fortunately, organizations can take steps to prevent this kind of attack. CSPP falls into the injection category of attacks, like Cross-Site Scripting or SQL Injection, which are made possible when inputs from users are not properly escaped or sanitized. Escaping is a technique used to ensure that characters (mostly from user inputs) are treated as data, not as characters that are significant to the interpreter's parser. Software developers need to become aware of the danger of these attacks and learn about the defense mechanisms they need to introduce in their code. As well, software vendors need to provide templates or classes to facilitate coding and eliminate developers' guesswork for protecting against such vulnerabilities. Oracle has introduced the OracleConnectionStringBuilder class in Oracle Data Provider for .NET. Using this class, developers can employ a configuration file to provide the connection string and/or dynamically set the values through key/value pairs. It makes creating connection strings less error-prone and easier to manage, and ultimately using the OracleConnectionStringBuilder class provides better security against injection into connection strings.
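To see the "last one wins" rule in action outside of any particular driver, here is a small, self-contained Java sketch. It is illustrative only (no real provider parses strings exactly this way), but it mirrors the semicolon-delimited, case-insensitive, last-occurrence-wins behavior described above:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class CsppDemo {

        // Split "key = value; key = value" pairs; later occurrences of a key
        // overwrite earlier ones, which is the "last one wins" rule
        static Map<String, String> parse(String connectionString) {
            Map<String, String> params = new LinkedHashMap<>();
            for (String pair : connectionString.split(";")) {
                int eq = pair.indexOf('=');
                if (eq > 0) {
                    // Keywords are treated as case-insensitive, as most providers do
                    params.put(pair.substring(0, eq).trim().toLowerCase(),
                               pair.substring(eq + 1).trim());
                }
            }
            return params;
        }

        public static void main(String[] args) {
            // What the attacker types into the password field of the login form
            String password = "XXX; Integrated Security = true";

            // The application naively concatenates the input into the string
            String cs = "Data Source = myDataSource; Initial Catalog = db; "
                      + "Integrated Security = no; User ID = myUsername; "
                      + "Password = " + password + ";";

            // Prints integrated security=true: the attacker's pair wins
            System.out.println(parse(cs));
        }
    }

The attacker-supplied pair overrides the legitimate "Integrated Security = no" setting, which is exactly the bypass described above. A builder class that escapes or rejects delimiter characters in values prevents user input from ever being interpreted as a separate keyword.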
For More Information:
- The OracleConnectionStringBuilder class documentation is located at http://download.oracle.com/docs/cd/B28359_01/win.111/b28375/OracleConnectionStringBuilderClass.htm
- Oracle has developed a publicly available course on preventing SQL injection. The Server Technologies Curriculum course "Defending Against SQL Injection Attacks!" is located at http://st-curriculum.oracle.com/tutorial/SQLInjection/index.htm
- The OWASP web site also provides a number of useful resources. It is located at http://www.owasp.org/index.php/Main_Page

    Read the article

  • How to Create AppArmor Profiles to Lock Down Programs on Ubuntu

    - by Chris Hoffman
    AppArmor locks down programs on your Ubuntu system, allowing them only the permissions they require in normal use – particularly useful for server software that may become compromised. AppArmor includes simple tools you can use to lock down other applications. AppArmor is included by default in Ubuntu and some other Linux distributions. Ubuntu ships AppArmor with several profiles, but you can also create your own AppArmor profiles. AppArmor's utilities can monitor a program's execution and help you create a profile. Before creating your own profile for an application, you may want to check the apparmor-profiles package in Ubuntu's repositories to see if a profile for the application you want to confine already exists.

    Read the article

  • MySQL Cluster 7.3 Labs Release – Foreign Keys Are In!

    - by Mat Keep
    Summary (aka TL;DR): Support for Foreign Key constraints has been one of the most requested feature enhancements for MySQL Cluster. We are therefore extremely excited to announce that Foreign Keys are part of the first Labs Release of MySQL Cluster 7.3 – available for download, evaluation and feedback now! (Select the mysql-cluster-7.3-labs-June-2012 build.) In this blog, I will attempt to discuss the design rationale, implementation, configuration and steps to get started in evaluating the first MySQL Cluster 7.3 Labs Release.

Pace of Innovation

It was only a couple of months ago that we announced the General Availability (GA) of MySQL Cluster 7.2, delivering 1 billion queries per minute, with 70x higher cross-shard JOIN performance, a Memcached NoSQL key-value API and cross-data-center replication. This release has been a huge hit, with downloads and deployments quickly reaching record levels. The announcement of the first MySQL Cluster 7.3 Early Access labs release at today's MySQL Innovation Day event demonstrates the continued pace of Cluster development, and provides an opportunity for the community to evaluate and give feedback on the new features they want to see.

What's the Plan for MySQL Cluster 7.3?

Well, Foreign Keys, as you may have gathered by now(!), and this is the focus of this first Labs Release. As with MySQL Cluster 7.2, we plan to publish a series of preview releases for 7.3 that will incrementally add new candidate features for a final GA release (subject to the usual safe harbor statement below*), including:

- New NoSQL APIs
- Features to automate the configuration and provisioning of multi-node clusters, on premise or in the cloud
- Performance and scalability enhancements
- Taking advantage of features in the latest MySQL 5.x Server GA

Design Rationale

MySQL Cluster is designed as a "Not-Only-SQL" database. It combines attributes that enable users to blend the best of both relational and NoSQL technologies into solutions that deliver web scalability with 99.999% availability and real-time performance, including:

- Concurrent NoSQL and SQL access to the database
- Auto-sharding with simple scale-out across commodity hardware
- Multi-master replication with failover and recovery both within and across data centers
- Shared-nothing architecture with no single point of failure
- Online scaling and schema changes
- ACID compliance and support for complex queries across shards

Native support for Foreign Key constraints enables users to extend the benefits of MySQL Cluster into a broader range of use cases, including packaged applications in areas such as eCommerce and Web Content Management that prescribe databases with Foreign Key support, and in-house developments benefiting from Foreign Key constraints to simplify data models and eliminate the additional application logic needed to maintain data consistency and integrity between tables.
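To make the design rationale concrete, here is a minimal JDBC sketch, mine rather than the post's, assuming a MySQL server attached to the cluster and placeholder connection details. With the constraint declared on the NDB tables, the data nodes themselves reject orphaned child rows and cascade deletes, so the application-side integrity checks mentioned above disappear:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class NdbForeignKeySketch {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection details for a MySQL server node of the cluster
            try (Connection c = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "user", "password");
                 Statement s = c.createStatement()) {

                // Parent and child tables stored in the NDB data nodes
                s.executeUpdate("CREATE TABLE customer ("
                    + "id INT PRIMARY KEY, name VARCHAR(50)) ENGINE=NDB");
                s.executeUpdate("CREATE TABLE orders ("
                    + "id INT PRIMARY KEY, customer_id INT, "
                    + "FOREIGN KEY (customer_id) REFERENCES customer(id) "
                    + "ON DELETE CASCADE) ENGINE=NDB");

                s.executeUpdate("INSERT INTO customer VALUES (1, 'Alice')");
                s.executeUpdate("INSERT INTO orders VALUES (10, 1)");

                // The data nodes enforce the constraint: this orphan row is
                // rejected with no application-side existence check
                try {
                    s.executeUpdate("INSERT INTO orders VALUES (11, 999)");
                } catch (SQLException expected) {
                    System.out.println("Rejected: " + expected.getMessage());
                }

                // CASCADE: deleting the parent also removes its child rows
                s.executeUpdate("DELETE FROM customer WHERE id = 1");
            }
        }
    }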
Implementation

The Foreign Key functionality is implemented directly within MySQL Cluster's data nodes, allowing any client API accessing the cluster to benefit from it – whether using SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA or HTTP/REST). The core referential actions defined in the SQL:2003 standard are implemented:

- CASCADE
- RESTRICT
- NO ACTION
- SET NULL

In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the cluster continues to serve both read and write requests during the operation. An important difference to note from the Foreign Key implementation in InnoDB is that MySQL Cluster does not support the updating of Primary Keys from within the data nodes themselves; instead, the UPDATE is emulated with a DELETE followed by an INSERT operation. Therefore an UPDATE operation will return an error if the parent reference uses a Primary Key, unless the CASCADE action is used, in which case the delete operation will result in the corresponding rows in the child table being deleted. The Engineering team plans to change this behavior in a subsequent preview release. Also note that while in InnoDB "NO ACTION" is identical to "RESTRICT", in MySQL Cluster "NO ACTION" means "deferred check": the constraint is checked before commit, allowing user-defined triggers to automatically make changes in order to satisfy the Foreign Key constraints.

Configuration

There is nothing special you have to do here – Foreign Key constraint checking is enabled by default. If you intend to migrate existing tables from another database or storage engine, for example from InnoDB, there are a couple of best practices to observe:

1. Analyze the structure of the Foreign Key graph and run ALTER TABLE ENGINE=NDB in the correct sequence to ensure constraints are enforced.
2. Alternatively, drop the Foreign Key constraints prior to the import process and then recreate them when complete.

Getting Started

Read this blog for a demonstration of using Foreign Keys with MySQL Cluster. You can download the MySQL Cluster 7.3 Labs Release with Foreign Keys today (select the mysql-cluster-7.3-labs-June-2012 build). If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a single host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3). Post any questions to the MySQL Cluster forum, where our Engineering team will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu). And if you have any feedback, please post it to the Comments section of this blog.

Summary

MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. This first Labs Release of MySQL Cluster 7.3 gives you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and which other features you want to see in future releases.

* Safe Harbor Statement

This information is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions.
The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

    Read the article

  • Announcing: Improvements to the Windows Azure Portal

    - by ScottGu
    Earlier today we released a number of enhancements to the new Windows Azure Management Portal. These new capabilities include:

- Service Bus management and monitoring
- Support for managing co-administrators
- Import/export support for SQL Databases
- Virtual Machine experience enhancements
- Improved Cloud Service status notifications
- Media Services monitoring support
- Storage container creation and access control support

All of these improvements are now live in production and available to start using immediately. Below are more details on them.

Service Bus Management and Monitoring

The new Windows Azure Management Portal now supports Service Bus management and monitoring. Service Bus provides rich messaging infrastructure that can sit between applications (or between cloud and on-premise environments) and allow them to communicate in a loosely coupled way for improved scale and resiliency. With the new Service Bus experience, you can now create and manage Service Bus Namespaces, Queues, Topics, Relays and Subscriptions. You can also get rich monitoring for Service Bus Queues, Topics and Subscriptions. To create a Service Bus namespace, select the "Service Bus" tab in the Windows Azure portal and then simply select the CREATE command. Doing so will bring up a new "Create a Namespace" dialog that allows you to name and create a new Service Bus Namespace. Once created, you can obtain the security credentials associated with the namespace via the ACCESS KEY command. This gives you the ability to obtain the connection string associated with the service namespace; you can copy and paste these values into any application that requires these credentials. It is also now easy to create Service Bus Queues and Topics via the NEW experience in the portal drawer: simply click the NEW command and navigate to the "App Services" category to create a new Service Bus entity. Once you provision a new Queue or Topic, it can be managed in the portal. Clicking on a namespace will display all queues and topics within it, and clicking on an item in the list will allow you to drill down into a dashboard view that lets you monitor the activity and traffic within it, as well as perform operations on it. For example, a dashboard for an "orders" queue surfaces both the incoming and outgoing message flow rates, as well as the total queue length and queue size. To monitor pub/sub subscriptions, you can use the ADD METRICS command within a topic and select a specific subscription to monitor.

Support for Managing Co-Administrators

You can now add co-administrators to your Windows Azure subscription using the new Windows Azure Portal. This allows you to share management of your Windows Azure services with other users. Subscription co-administrators share the same administrative rights and permissions that the service administrator has, except that a co-administrator cannot change or view billing details about the account, nor remove the service administrator from a subscription. In the SETTINGS section, click on the ADMINISTRATORS tab, and select the ADD button to add a co-administrator to your subscription. To add a co-administrator, you specify the email address for a Microsoft account (formerly Windows Live ID) or an organizational account, and choose the subscription you want to add them to. You can later update the subscriptions that the co-administrator has access to by clicking on the EDIT button, and then selecting or deselecting the subscriptions to which they belong.
Import/Export Support for SQL Databases

The Windows Azure administration portal now supports importing and exporting SQL Databases to and from Blob Storage. Databases can be imported/exported to blob storage using the same BACPAC file format that is supported with SQL Server 2012. Among other benefits, this makes it easy to copy and migrate databases between on-premise and cloud environments. SQL Databases now have an EXPORT command in the bottom drawer that, when pressed, will prompt you to save your database to a Windows Azure storage container. The UI allows you to choose an existing storage account or create a new one, as well as the name of the BACPAC file to persist in blob storage. You can also import and create a new SQL Database by using the NEW command; this will prompt you to select the storage container and file to import the database from. The Windows Azure Portal enables you to monitor the progress of import and export operations. If you choose to log out of the portal, you can come back later and check on the status of all of the operations in the new history tab of the SQL Database server – this shows your entire import and export history and the status (success/fail) of each.

Enhancements to the Virtual Machine Experience

One of the common pain points we have heard from customers using the preview of our new Virtual Machine support has been the inability to delete the associated VHDs when a VM instance (or VM drive) gets deleted. Prior to today's release, the VHDs would remain in your storage account and accumulate storage charges. You can now navigate to the Disks tab within the Virtual Machine extension, select a VM disk to delete, and click the DELETE DISK command. When you click the DELETE DISK button you have the option to delete the disk and the associated .VHD file (completely clearing it from storage); alternatively, you can delete the disk but still retain a .VHD copy of it in storage.

Improved Cloud Service Status Notifications

The Windows Azure portal now exposes more information about the health status of role instances. If any of the instances are in a non-running state, the summary at the top of the dashboard will reflect this (and update automatically as role health changes). Clicking the instance hyperlink within this status summary view will navigate you to a detailed role instance view and allow you to get more detailed health status for each of the instances. The portal has been updated to provide more specific status information within this detailed view, giving you better visibility into the health of your app.

Monitoring Support for Media Services

Windows Azure Media Services allows you to create media processing jobs (for example, encoding media files) in your Windows Azure Media Services account. In the Windows Azure Portal, you can now monitor the number of encoding jobs that are queued up for processing, as well as active, failed and queued tasks for encoding jobs. On your Media Services account dashboard, you can visualize the monitoring data for the last 6 hours, 24 hours or 7 days.

Storage Container Creation and Access Control Support

You can now create Windows Azure Storage containers from within the Windows Azure Portal.
After selecting a storage account, you can navigate to the CONTAINERS tab and click the ADD CONTAINER command. This will display a dialog that lets you name the new container and control access to it. You can also update the access settings, as well as the container metadata, of existing containers by selecting one and then using the new EDIT CONTAINER command; this brings up the edit container dialog that allows you to change and save its settings. In addition to creating and editing containers, you can click on them within the portal to drill in and view the blobs within them.

Summary

The above features are all now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it. We'll have even more new features and enhancements coming later this month – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5, and support .NET 4.5 with Websites). Keep an eye out on my blog for details as these new features become available.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • .NET Security Part 3

    - by Simon Cooper
    You write a security-related application that allows addins to be used. These addins (as dlls) can be downloaded from anywhere and, if allowed to run full-trust, could open a security hole in your application. So you want to restrict what the addin dlls can do, using a sandboxed appdomain, as explained in my previous posts. But there needs to be an interaction between the code running in the sandbox and the code that created the sandbox, so the sandboxed code can control or react to things that happen in the controlling application. Sandboxed code needs to be able to call code outside the sandbox. Now, there are various methods of allowing cross-appdomain calls, the two main ones being .NET Remoting with MarshalByRefObject, and WCF named pipes. I'm not going to cover the details of setting up such mechanisms here, or which you should choose for your specific situation; there are plenty of blogs and tutorials covering such issues elsewhere. What I'm going to concentrate on here is the more general problem of running fully-trusted code within a sandbox, which is required in most methods of app-domain communication and control.

Defining assemblies as fully-trusted

In my last post, I mentioned that when you create a sandboxed appdomain, you can pass in a list of assembly strongnames that run as full-trust within the appdomain:

    // get the Assembly object for the assembly
    Assembly assemblyWithApi = ...
    // get the StrongName from the assembly's collection of evidence
    StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>();
    // create the sandbox
    AppDomain sandbox = AppDomain.CreateDomain(
        "Sandbox", null, appDomainSetup, restrictedPerms, apiStrongName);

Any assembly that is loaded into the sandbox with a strong name matching one in the list of full-trust strong names is unconditionally given full-trust permissions within the sandbox, regardless of permissions and sandbox setup. This is very powerful! You should only use this for assemblies that you trust as much as the code creating the sandbox. So now you have a class that you want the sandboxed code to call:

    // within assemblyWithApi
    public class MyApi {
        public static void MethodToDoThings() { ... }
    }

    // within the sandboxed dll
    public class UntrustedSandboxedClass {
        public void DodgyMethod() {
            ...
            MyApi.MethodToDoThings();
            ...
        }
    }

However, if you try to do this, you get quite an ugly exception:

    MethodAccessException: Attempt by security transparent method 'UntrustedSandboxedClass.DodgyMethod()' to access security critical method 'MyApi.MethodToDoThings()' failed.

Security transparency, which I covered in my first post in the series, has entered the picture. Partially-trusted code runs at the Transparent security level, fully-trusted code runs at the Critical security level, and Transparent code cannot under any circumstances call Critical code.

Security transparency and AllowPartiallyTrustedCallersAttribute

So the solution is easy, right? Make MethodToDoThings SafeCritical; then the transparent code running in the sandbox can call the API:

    [SecuritySafeCritical]
    public static void MethodToDoThings() { ... }

However, this doesn't solve the problem. When you try again, exactly the same exception is thrown; MethodToDoThings is still running as Critical code. What's going on? By default, a fully-trusted assembly always runs Critical code, regardless of any security attributes on its types and methods.
This is because it may not have been designed in a secure way when called from transparent code; as we'll see in the next post, it is easy to open a security hole despite all the security protections .NET 4 offers. When exposing an assembly to be called from partially-trusted code, the entire assembly needs a security audit to decide what should be transparent, safe-critical, or critical, and to close any potential security holes. This is where AllowPartiallyTrustedCallersAttribute (APTCA) comes in. Without this attribute, fully-trusted assemblies run Critical code, and partially-trusted assemblies run Transparent code. When this attribute is applied to an assembly, it confirms that the assembly has had a full security audit and that it is safe to be called from untrusted code. All code in that assembly runs as Transparent, but SecurityCriticalAttribute and SecuritySafeCriticalAttribute can be applied to individual types and methods to make those run at the Critical or SafeCritical levels, with all the restrictions that entails. So, to allow the sandboxed assembly to call the full-trust API assembly, simply add APTCA to the API assembly:

    [assembly: AllowPartiallyTrustedCallers]

and everything works as you expect. The sandboxed dll can call your API dll, and from there communicate with the rest of the application.

Conclusion

That's the basics of running a full-trust assembly in a sandboxed appdomain, and allowing a sandboxed assembly to access it. The key is AllowPartiallyTrustedCallersAttribute, which is what lets partially-trusted code call a fully-trusted assembly. However, applying APTCA to an assembly means that you must run a full security audit of every type and member in the assembly. If you don't, then you could inadvertently open a security hole. I'll be looking at ways this can happen in my next post.

    Read the article

< Previous Page | 602 603 604 605 606 607 608 609 610 611 612 613  | Next Page >