Search Results

Search found 63361 results on 2535 pages for 'dynamic data'.


  • Fast Data - Big Data's Achilles' heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions.

    Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The software in the Fast Data solution stack involves three key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address a high-throughput, low-latency data management requirement.

    Oracle Event Processing enables continuous queries to filter the Big Data fire hose, chains intelligent events to real-time service invocation, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time, so SQL statements can look for triggers on events like a breach of a weighted moving average on a real-time data stream.

    Oracle Coherence is a distributed grid caching solution used to provide very low latency access to cached data when the data is too big to fit into a single process; the data is spread around a grid architecture to provide memory-latency-speed access. It also has special capabilities to deploy remote behavioral execution for "near data" processing.

    Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent, low-latency reads. For example, large sensor networks generate data that needs to be captured while analysts simultaneously extract the data using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra-low-latency user profile management, while also leveraging that data in holistic MapReduce operations on a Hadoop cluster to do segmented site analysis. NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low-latency, scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture. A NoSQL cluster can be deployed easily to provide essential services in industry-specific Fast Data solutions, and these technologies have been demonstrated working together in a location-based personalization service.

    The question then becomes how these things work together to deliver an end-to-end Fast Data solution. The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, with Big Data you may often need to use them together. You may need the memory latencies of the Coherence cache but simply have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme-speed cache overflow and retrieval. Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database.

    The stream-processing side is similar to the Coherence case. As your sliding windows get larger, holding all the data in the stream can become difficult, and out-of-band data may need to be offloaded into persistent storage. OEP needs an extreme-speed database like Oracle NoSQL Database to help it keep the real-time loop performing while dealing with persistent spill in the data stream. Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together: OEP & Oracle NoSQL Database.
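    To make the cache-overflow idea concrete in the abstract, here is a minimal C# sketch of the pattern: serve from a fast in-memory cache, fall back to a persistent key-value tier on a miss, and repopulate the hot set on the way back. This is not how Coherence and Oracle NoSQL Database actually integrate; the IKeyValueStore interface and OverflowCache class are hypothetical stand-ins used only to illustrate the shape of the solution.

        using System.Collections.Generic;

        // Hypothetical stand-in for the persistent key-value tier (e.g. a NoSQL store).
        public interface IKeyValueStore<TKey, TValue>
        {
            bool TryGet(TKey key, out TValue value);
            void Put(TKey key, TValue value);
        }

        // Conceptual sketch of the cache-overflow pattern.
        public class OverflowCache<TKey, TValue>
        {
            private readonly Dictionary<TKey, TValue> _cache = new Dictionary<TKey, TValue>();
            private readonly IKeyValueStore<TKey, TValue> _store;
            private readonly int _maxCachedItems;

            public OverflowCache(IKeyValueStore<TKey, TValue> store, int maxCachedItems)
            {
                _store = store;
                _maxCachedItems = maxCachedItems;
            }

            public bool TryGet(TKey key, out TValue value)
            {
                if (_cache.TryGetValue(key, out value))
                    return true;                          // memory-speed hit

                if (_store.TryGet(key, out value))        // overflow retrieval from the persistent tier
                {
                    if (_cache.Count < _maxCachedItems)
                        _cache[key] = value;              // repopulate the hot set
                    return true;
                }
                return false;
            }

            public void Put(TKey key, TValue value)
            {
                _store.Put(key, value);                   // durable write
                if (_cache.Count < _maxCachedItems)
                    _cache[key] = value;
            }
        }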

    Read the article

  • Oracle Announces Oracle Big Data Appliance X3-2 and Enhanced Oracle Big Data Connectors

    - by jgelhaus
    Enables Customers to Easily Harness the Business Value of Big Data at Lower Cost

    Engineered System Simplifies Big Data for the Enterprise

    Oracle Big Data Appliance X3-2 hardware features the latest 8-core Intel® Xeon E5-2600 series of processors and, compared with the previous generation, the 18 compute and storage servers with 648 TB of raw storage now offer: 33 percent more processing power with 288 CPU cores; 33 percent more memory per node with 1.1 TB of main memory; and up to a 30 percent reduction in power and cooling.

    Oracle Big Data Appliance X3-2 further simplifies implementation and management of big data by integrating all the hardware and software required to acquire, organize and analyze big data. It includes: support for CDH4.1, including software upgrades developed collaboratively with Cloudera to simplify NameNode High Availability in Hadoop, eliminating the single point of failure in a Hadoop cluster; Oracle NoSQL Database Community Edition 2.0, the latest version that brings better Hadoop integration, elastic scaling and new APIs, including JSON and C support; the Oracle Enterprise Manager plug-in for Big Data Appliance, which complements Cloudera Manager to enable users to more easily manage a Hadoop cluster; updated distributions of Oracle Linux and the Oracle Java Development Kit; and an updated distribution of open source R, optimized to work with high-performance multi-threaded math libraries.

    Read More:
      - Data sheet: Oracle Big Data Appliance X3-2
      - Oracle Big Data Appliance: Datacenter Network Integration
      - Big Data and Natural Language: Extracting Insight From Text
      - Thomson Reuters Discusses Oracle's Big Data Platform

    Connectors Integrate Hadoop with Oracle Big Data Ecosystem

    Oracle Big Data Connectors is a suite of software built by Oracle to integrate Apache Hadoop with Oracle Database, Oracle Data Integrator, and Oracle R Distribution. Enhancements to Oracle Big Data Connectors extend these data integration capabilities. With updates to every connector, this release includes: Oracle SQL Connector for Hadoop Distributed File System, for high-performance SQL queries on Hadoop data from Oracle Database, enhanced with increased automation and querying of Hive tables, and now supported within the Oracle Data Integrator Application Adapter for Hadoop; and transparent access to the Hive Query Language from R, plus the introduction of new analytic techniques executing natively in Hadoop, enabling R developers to be more productive by increasing access to Hadoop in the R environment.

    Read More:
      - Data sheet: Oracle Big Data Connectors
      - High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database

    Read the article

  • Best approach to accessing multiple data sources in a web application

    - by ced
    I have a base web application developed with .NET technologies (ASP.NET), used on our LAN by 30 users simultaneously. From this web application I've developed two verticalizations used by online users. In the future I expect hundreds of users simultaneously. Our company has different locations, and each site uses its own database. The web application needs to retrieve information from all existing databases. Currently there are 3 databases, but future expansion to new offices is not excluded. My question, then, is: what is the best strategy for a web application to retrieve information from different databases (which have the same schema), where the main objectives are data-access performance and high fault tolerance? Are there case studies in the literature that I can take as an example? Do you know some good documents to study? Do you have any tips to implement this task efficiently? Intuitively I would say that two possible strategies are: perform queries against the different sources in real time and aggregate the data on the fly; or create a repository that contains the union of the entities of interest and perform queries directly on the repository.
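    For the first strategy, a minimal sketch of fanning the same query out to every site and merging the results might look like the following. It assumes SQL Server with the async ADO.NET APIs (.NET 4.5 or later); the connection strings, the Customers table and the Name column are hypothetical placeholders, not anything from the original question.

        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;
        using System.Threading.Tasks;

        // Sketch of "query all sources in real time and aggregate on the fly".
        public static class FederatedQuery
        {
            public static async Task<List<string>> GetCustomerNamesAsync(IEnumerable<string> connectionStrings)
            {
                var perSiteQueries = connectionStrings.Select(async cs =>
                {
                    var names = new List<string>();
                    using (var conn = new SqlConnection(cs))
                    using (var cmd = new SqlCommand("SELECT Name FROM Customers", conn))
                    {
                        await conn.OpenAsync();
                        using (var reader = await cmd.ExecuteReaderAsync())
                        {
                            while (await reader.ReadAsync())
                                names.Add(reader.GetString(0));
                        }
                    }
                    return names;
                });

                // Fan out to every site in parallel, then merge the partial results.
                var results = await Task.WhenAll(perSiteQueries);
                return results.SelectMany(r => r).ToList();
            }
        }

    In practice each per-site call would be wrapped in its own error handling and timeout, so that one unreachable office degrades the aggregate result rather than failing the whole request; the repository (second) strategy trades that complexity for data latency.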

    Read the article

  • SQL SERVER – Introduction to Big Data – Guest Post

    - by pinaldave
    BIG Data – such a big word – everybody talks about it nowadays. It is the word in the database world. In one conversation I asked my friend Jasjeet Singh the same question – what is Big Data? He instantly came up with a very effective write-up. Jasjeet is working as a Technical Manager with Koenig Solutions. He leads the SQL domain and holds rich IT industry experience. Talking about Koenig, it is a 19-year-old IT training company that offers several certification choices. Some of its courses include SharePoint training, project management certifications, Microsoft trainings, business intelligence programs, web design and development courses, etc.

    Big Data, as the name suggests, is about data that is BIG in nature. The data is BIG in terms of size, and it is difficult to manage such enormous data with the relational database management systems that are quite popular these days. Big Data is not just about being large in size; it is also about the variety of data that differs in form or type. Some examples of Big Data are given below:

      - Scientific data related to weather and atmosphere, genetics, etc.
      - Data collected by various medical procedures, such as radiology, CT scans, MRI, etc.
      - Data related to the Global Positioning System
      - Pictures and videos
      - Radio frequency data
      - Data that may vary very rapidly, like stock exchange information

    Apart from difficulties in managing and storing such data, it is difficult to query, analyze and visualize it. The characteristics of Big Data can be defined by four Vs:

      - Volume: It simply means a large volume of data that may span petabytes, exabytes and so on. However, what volume of data is considered Big Data also varies from organization to organization.
      - Variety: As discussed above, Big Data is not limited to relational information or structured data. It can also include unstructured data like pictures, videos, text, audio, etc.
      - Velocity: Velocity means the speed at which data changes. The higher the velocity, the more efficient the system must be to capture and analyze the data. Missing any important point may lead to wrong analysis or may even result in loss.
      - Veracity: It has been recently added as the fourth V, and generally means truthfulness or adherence to the truth. In terms of Big Data, it is more of a challenge than a characteristic. It is difficult to ascertain the truth from such an enormous amount of data, especially data with high velocity. There are always chances of having imprecise and uncertain data, and it is a challenging task to clean such data before it is analyzed.

    Big Data can be considered the next big thing in the IT sector in terms of innovation and development. If appropriate technologies are developed to analyze and use the information, it can be the driving force for almost all industrial segments, including retail, manufacturing, services, finance, and healthcare. This will help them automate business decisions, increase productivity, and innovate and develop new products.

    Thanks Jasjeet Singh for an excellent write-up. Jasjeet Singh is working as a Technical Manager with Koenig Solutions.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Database, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Big Data

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end users. Among those offerings is the "Data Hub" – a codename for a project that, ironically, actually does what the codename says.

    In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once and, in some cases, made changes to one of the copies. It's difficult to know which location or version of the data is authoritative. Then there's the problem of accessing the data. It's fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing the security, connection-string, and discovery issues up all over again.

    Enter the Data Hub. This is an online offering where you assign an administrator and data stewards. You import the data into the service, and it's available to you – and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options for you or your users to access that data. You're then able, if you wish, to combine that data with other data in one location.

    So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) that know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I'll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you'll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage.

    After the data is loaded and you've assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924  You can make HTTP calls instead of code, and the data can even be exposed as an OData feed.

    As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
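    Since the data can be exposed as an OData feed, the simplest programmatic consumer is an ordinary HTTP GET. The sketch below is illustrative only: the feed URL is a hypothetical placeholder, and the authentication the real service requires is omitted; see the MSDN link above for the actual API.

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        // Minimal sketch: request a (hypothetical) Data Hub OData feed as JSON.
        public static class DataHubClient
        {
            public static async Task<string> GetFeedAsync()
            {
                using (var client = new HttpClient())
                {
                    // Placeholder URL - substitute your own portal's feed address
                    // and add whatever authentication the service requires.
                    client.DefaultRequestHeaders.Add("Accept", "application/json");
                    return await client.GetStringAsync(
                        new Uri("https://example.datahub.contoso.com/odata/Sales"));
                }
            }
        }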

    Read the article

  • Tools to partition dynamic disk

    - by rangalo
    Hi, I have a 1 TB hard disk with Windows 7 pre-installed. I would like to delete and resize some partitions and create ext3/swap partitions to install Linux. Which tools are available for this? I tried GParted, but it doesn't work.

    Read the article

  • Rewrite a dynamic URL to a new dynamic URL

    - by Jmino14
    I am new to the RewriteEngine and have not been able to find an answer to the following issue. I run an ecommerce site with an ever-changing catalog of product SKUs. Our URLs are dynamic. The question is: what if I want to have a dynamic variable redirect to a different dynamic variable? For instance, I want: http://www.mydomain.com/product.jhtm?id=12345 to now go to: www.mydomain.com/product.jhtm?id=78910 How can I do this through .htaccess? Thanks in advance.

    Read the article

  • Ideal data structure/techniques for storing generic scheduler data in C#

    - by GraemeMiller
    I am trying to implement a generic scheduler object in C# 4 which will output a table in HTML. The basic aim is to show some object along with various attributes, and whether it was doing something in a given time period. The scheduler will output a table displaying the headers: Detail Field 1 ... N | Date 1 ... N. I want to initialise the table with a start date and an end date to create the date range (ideally it could also do other time periods, e.g. hours, but that isn't vital). I then want to provide a generic object which will have associated events. Where an object has events within the period, I want a table cell to be marked. E.g.

      Name  Height  Weight  1/1/2011  2/1/2011  3/1/2011 ... 31/1/2011
      Ben   5.11    75      X         X         X
      Bill  5.7     83      X         X

    So I created the scheduler with start date = 1/1/2011 and end date = 31/1/2011. I'd like to give it my person object (already sorted) and tell it which fields I want displayed (Name, Height, Weight). Each person has events which have a start date and end date. Some events will start and end outwith the period, but they should still be shown on the relevant dates. Ideally I'd like to have been able to provide it with, say, a class booking object as well, so I'm trying to keep it generic. I have seen JavaScript implementations of similar things. What would a good data structure be for this? Any thoughts on techniques I could use to make it generic? I am not great with generics, so any tips are appreciated.
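    One possible shape for the data, sketched below under stated assumptions: the ScheduleEvent, ColumnSpec and Scheduler names are purely illustrative, the grid is kept as rows of strings so it can be rendered to HTML or anything else, and the code sticks to C# 4 features (no tuples or async).

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Illustrative sketch only: one way to keep the scheduler generic in C# 4.
        public class ScheduleEvent
        {
            public DateTime Start { get; set; }
            public DateTime End { get; set; }
        }

        public class ColumnSpec<T>
        {
            public string Header { get; set; }
            public Func<T, object> Selector { get; set; }
        }

        public class Scheduler<T>
        {
            private readonly DateTime _from;
            private readonly DateTime _to;
            private readonly IList<ColumnSpec<T>> _columns;
            private readonly Func<T, IEnumerable<ScheduleEvent>> _eventsOf;

            public Scheduler(DateTime from, DateTime to,
                             IList<ColumnSpec<T>> columns,
                             Func<T, IEnumerable<ScheduleEvent>> eventsOf)
            {
                _from = from.Date;
                _to = to.Date;
                _columns = columns;
                _eventsOf = eventsOf;
            }

            // Detail-field headers followed by one header per day in the range.
            public IEnumerable<string> Headers
            {
                get
                {
                    foreach (var c in _columns) yield return c.Header;
                    for (var d = _from; d <= _to; d = d.AddDays(1)) yield return d.ToShortDateString();
                }
            }

            // One row per item: detail cells first, then an "X" or "" per day,
            // marking any day that falls inside one of the item's events.
            public IEnumerable<IList<string>> BuildRows(IEnumerable<T> items)
            {
                foreach (var item in items)
                {
                    var cells = _columns.Select(c => Convert.ToString(c.Selector(item))).ToList();
                    var events = _eventsOf(item).ToList();
                    for (var day = _from; day <= _to; day = day.AddDays(1))
                    {
                        var d = day;
                        bool busy = events.Any(e => e.Start.Date <= d && e.End.Date >= d);
                        cells.Add(busy ? "X" : "");
                    }
                    yield return cells;
                }
            }
        }

    A Person class with Name, Height and Weight and a list of bookings could then be passed in with three ColumnSpec entries and a lambda that returns its events; events starting or ending outwith the range are handled naturally by the per-day overlap test.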

    Read the article

  • .NET 4.0 Dynamic object used statically?

    - by Kevin Won
    I've gotten quite sick of XML configuration files in .NET and want to replace them with a format that is more sane. Therefore, I'm writing a config file parser for C# applications that will take a custom config file format, parse it, and create a Python source string that I can then execute in C# and use as a static object (yes, that's right--I want a static, not dynamic, object in the end). Here's an example of what my config file looks like:

        // my custom config file format
        GlobalName: ExampleApp
        Properties {
            ExternalServiceTimeout: "120"
        }
        Python {
            // this allows for straight python code to be added to handle custom config
            def MyCustomPython():
                return "cool"
        }

    Using ANTLR I've created a lexer/parser that will convert this format to a Python script. So assume I have that all right and can take the .config above and run my lexer/parser on it to get a Python script out the back (this has the added benefit of giving me a validation tool for my config). By running the resultant script in C#:

        // simplified example of getting the dynamic python object in C#
        // (not how I really do it)
        ScriptRuntime py = Python.CreateRuntime();
        dynamic conf = py.UseFile("conftest.py");
        dynamic t = conf.GetConfTest("test");

    I can get a dynamic object that has my configuration settings. I can now get my config file settings in C# by invoking a dynamic method on that object:

        // C# calling a method on the dynamic python object
        var timeout = t.GetProperty("ExternalServiceTimeout");
        // the config also allows for straight Python scripting (via the Python block)
        var special = t.MyCustomPython();

    Of course, I have no type safety here and no IntelliSense support. I have a dynamic representation of my config file, but I want a static one. I know what my Python object's type is--it is actually newing up an instance of a C# class. But since it's happening in Python, its type is not the C# type, but dynamic instead. What I want to do is then cast the object back to the C# type that I know the object is:

        // doesn't work--can't cast a dynamic to a static type (nulls out)
        IConfigSettings staticTypeConfig = t as IConfigSettings;

    Is there any way to figure out how to cast the object to the static type? I'm rather doubtful that there is... so doubtful that I took another approach, of which I'm not entirely sure. I'm wondering if someone has a better way... So here's my current tactic: since I know the type of the Python object, I am creating a C# wrapper class:

        public class ConfigSettings : IConfigSettings

    that takes in a dynamic object in the ctor:

        public ConfigSettings(dynamic settings)
        {
            this.DynamicProxy = settings;
        }

        public dynamic DynamicProxy { get; private set; }

    Now I have a reference to the Python dynamic object, of which I know the type. So I can then just put wrappers around the Python methods that I know are there:

        // wrapper access to the underlying dynamic object
        // this makes my dynamic object appear 'static'
        public string GetSetting(string key)
        {
            return this.DynamicProxy.GetProperty(key).ToString();
        }

    Now the dynamic object is accessed through this static proxy and thus can obviously be passed around in the static C# world via interface, etc.:

        // dependency-inject the dynamic object around (config is an IConfigSettings)
        IBusinessLogic logic = new BusinessLogic(config);

    This solution has the benefit of all the static typing stuff we know and love, while at the same time giving me the option of 'bailing out' to dynamic too:

        // the DynamicProxy property gives direct access to the dynamic object
        var result = config.DynamicProxy.MyCustomPython();

    But, man, this seems a rather convoluted way of getting to an object that is a static type in the first place! Since the whole dynamic/static interaction world is new to me, I'm really questioning whether my solution is optimal or if I'm missing something (i.e. some way of casting that dynamic object to a known static type) about how to bridge the chasm between these two universes.
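    For reference, the fragments above consolidate into a single compile-ready sketch along these lines. IConfigSettings, ConfigSettings, GetProperty, conftest.py and GetConfTest are taken from the post itself; the ConfigLoader class is an assumed helper added here only to tie the pieces together.

        using IronPython.Hosting;
        using Microsoft.Scripting.Hosting;

        public interface IConfigSettings
        {
            string GetSetting(string key);
            dynamic DynamicProxy { get; }
        }

        public class ConfigSettings : IConfigSettings
        {
            public ConfigSettings(dynamic settings)
            {
                DynamicProxy = settings;
            }

            public dynamic DynamicProxy { get; private set; }

            // Static-looking access that forwards to the dynamic Python object.
            public string GetSetting(string key)
            {
                return this.DynamicProxy.GetProperty(key).ToString();
            }
        }

        public static class ConfigLoader
        {
            public static IConfigSettings Load(string path, string name)
            {
                ScriptRuntime py = Python.CreateRuntime();
                dynamic conf = py.UseFile(path);          // e.g. "conftest.py"
                dynamic settings = conf.GetConfTest(name);
                return new ConfigSettings(settings);
            }
        }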

    Read the article

  • Accessing SQL Data Services via ADO.NET Data Service Client Library

    - by Mehmet Aras
    Is this possible? Basically I would like to use the SQL Data Services REST interface and let the ADO.NET Data Services client library handle the communication details and generate the entities that I can use. I looked at the samples in the February release of the Azure Services Kit, but the samples in there use HttpWebRequest and HttpWebResponse to consume SQL Data Services RESTfully. I was hoping to use the ADO.NET Data Services client library to abstract the low-level details away.
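    If the service exposes a conforming endpoint, the client side would look roughly like the sketch below. To be clear about the assumptions: the service URI and the Customer entity are hypothetical placeholders, authentication is omitted, and whether SQL Data Services' REST dialect is close enough to the ADO.NET Data Services protocol for this to work end to end is exactly the open question being asked.

        using System;
        using System.Data.Services.Client;
        using System.Data.Services.Common;
        using System.Linq;

        // Hypothetical entity shape - must match what the service actually exposes.
        [DataServiceKey("Id")]
        public class Customer
        {
            public string Id { get; set; }
            public string Name { get; set; }
        }

        public static class SdsClientSketch
        {
            public static void Run()
            {
                // Placeholder service root; the real authority/container address
                // and the required authentication would go here.
                var ctx = new DataServiceContext(
                    new Uri("https://myauthority.data.example.com/v1/mycontainer"));

                // LINQ query translated by the client library into a REST request.
                var customers = ctx.CreateQuery<Customer>("Customers")
                                   .Where(c => c.Name.StartsWith("A"))
                                   .ToList();

                foreach (var c in customers)
                    Console.WriteLine(c.Name);
            }
        }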

    Read the article

  • Bridging Two Worlds: Big Data and Enterprise Data

    - by Dain C. Hansen
    The big data world is all the vogue in today's IT conversations. It's a world of volume, velocity, variety – tantalizing us with its untapped potential. It's a world of transformational, game-changing technologies that have already begun to alter the information management landscape. One of the reasons that big data is so compelling is that it's a universal challenge that impacts every one of us. Whether it is healthcare, financial, manufacturing, government, or retail – big data presents a pressing problem for many industries: how can so much information be processed so quickly to deliver the 'bigger' picture? With big data we're tapping into new information that didn't exist before: social data, weblogs, sensor data, complex content, and more. What also makes big data revolutionary is that it turns traditional information architecture on its head, putting into question commonly accepted notions of where and how data should be aggregated, processed, analyzed, and stored. This is where Hadoop and NoSQL come in – new technologies which solve new problems for managing unstructured data.

    And now for some worst practices that I'd recommend you please not follow:

    Worst Practice Lesson 1: Throw away everything that you already know about data management and data integration tools, and start completely over. One shouldn't forget what's already running in today's IT. Today's business analytics, data warehouses, business applications (ERP, CRM, SCM, HCM), and even many social, mobile, and cloud applications still rely almost exclusively on structured data – or what we'd like to call enterprise data. This dilemma is what today's IT leaders are up against: what are the best ways to bridge enterprise data with big data? And what are the best strategies for dealing with the complexities of these two unique worlds?

    Worst Practice Lesson 2: Throw away all of your existing business applications ... because they don't run on big data yet. Bridging the two worlds of big data and enterprise data means considering solutions that are complete, based on emerging Hadoop technologies (as well as traditional ones), and poised for success through integrated design tools and integrated platforms that connect to your existing business applications and support real-time analytics. Leveraging these types of best practices translates to improved productivity, lowered TCO, IT optimization, and better business insights.

    Worst Practice Lesson 3: Separate out [and keep separate] your big data sandboxes from all the current enterprise IT systems. Don't mix sand among playgrounds. We didn't tell you that you wouldn't get dirty doing this. Correlation between the two worlds is key. The real advantage to analyzing big data comes when you can correlate it with the existing data in your data warehouse or your current applications to make sense of the larger patterns.

    If you have not followed these worst practices 1-3, then you qualify for the first step of our journey: bridging the two worlds of enterprise data and big data. Over the next several weeks we'll be discussing this topic along with several others around big data as it relates to data integration. We welcome you to join us in the conversation by following us on Twitter on #BridgingBigData or downloading our latest white paper and resource kit: Big Data and Enterprise Data: Bridging Two Worlds.

    Read the article

  • SQL SERVER – Data Sources and Data Sets in Reporting Services SSRS

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site.

    Connecting to Your Data? When I was a child, the telephone book was an important part of my life. Maybe I was just a nerd, but I enjoyed getting a new book every year to page through to learn about the businesses in my small town or to discover where some of my school acquaintances lived. It was also the source of maps to my town's neighborhoods and the towns that surrounded me. To make a phone call, I would need a telephone number. In order to find a telephone number, I had to know how to use the telephone book. That seems pretty simple, but it resembles connecting to any data. You have to know where the data is and how to interact with it. A data source is the connection information that the report uses to connect to the database. You have two choices when creating a data source: whether to embed it in the report or to make it a shared resource usable by many reports.

    Data Sources and Data Sets. A few basic terms will make the upcoming choices make more sense. What database on what server do you want to connect to? It would be better to just ask… "what is your data source?" The connection you need to make to get your report's data is called a data source. If you connect to a data source (like the JProCo database) there may be hundreds of tables. You probably only want data from just a few tables. This means you want to write a specific query against this data source. A query on a data source to get just the records you need for an SSRS report is called a data set.

    Creating a local Data Source. You can embed a connection from your report directly to your JProCo database which (let's say) is installed on a server named Reno. If you move JProCo to a new server named Tampa then you need to update the data set. If you have 10 reports in one project that were all pointing to the JProCo database on the Reno server then they would all need to be updated at once. It's possible to make a project-level data source and have each report use that. This means one change can fix all 10 reports at once. This would be called a shared data source.

    Creating a Shared Data Source. The best advice I can give you is to create shared data sources. The reason I recommend this is that if a database moves to a new server you will have just one place in Report Manager to make the server name change. That one change will update the connection information in all the reports that use that data source. To get started, you will start with a fresh project. Go to Start > All Programs > SQL Server 2012 > Microsoft SQL Server Data Tools to launch SSDT. Once SSDT is running, click New Project to create a new project. Once the New Project dialog box appears, fill in the form, as shown in. Be sure to select Report Server Project this time – not the wizard. Click OK to dismiss the New Project dialog box. You should now have an empty project, as shown in the Solution Explorer.

    A report is meant to show you data. Where is the data? The first task is to create a shared data source. Right-click on the Shared Data Sources folder and choose Add New Data Source. The Shared Data Source Properties dialog box will launch, where you can fill in a name for the data source. By default, it is named DataSource1. The best practice is to give the data source a more meaningful name. It is possible that you will have projects with more than one data source and, by naming them, you can tell one from another. Type the name JProCo for the data source name and click the Edit button to configure the database connection properties. If you take a look at the types of data sources you can choose, you will see that SSRS works with many data platforms including Oracle, XML, and Teradata. Make sure SQL Server is selected before continuing.

    For this post, I am assuming that you are using a local SQL Server and that you can use your Windows account to log in to the SQL Server. If, for some reason, you must use SQL Server Authentication, choose that option and fill in your SQL Server account credentials. Otherwise, just accept Windows Authentication. If your database server was installed locally and with the default instance, just type in Localhost for the Server name. Select the JProCo database from the database list. At this point, the connection properties should look like. If you have installed a named instance of SQL Server, you will have to specify the server name like this: Localhost\InstanceName, replacing InstanceName with whatever your instance name is. If you are not sure about the named instance, launch the SQL Server Configuration Manager found at Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools. If you have a named instance, the name will be shown in parentheses. A default instance of SQL Server will display MSSQLSERVER; a named instance will display the name chosen during installation. Once you get the connection properties filled in, click OK to dismiss the Connection Properties dialog box and OK again to dismiss the Shared Data Source properties. You now have a data source in the Solution Explorer.

    What's next? I really need to thank Kathi Kellenberger and Rick Morelan for sharing this material for this 5-day series of posts on SSRS. To get really comfortable with SSRS you will get to know the different SSDT windows, build reports on your own (without the wizards), add report headers and footers, accept user input, and create levels, charts, or even maps for visual appeal. You might be surprised to know that a small 230-page book starts from the very beginning and covers the steps to do all these items. Beginning SSRS 2012 is a small, easy-to-follow book, so you can learn SSRS for less than $20. See Joes2Pros.com for more on this and other books. If you want to learn SSRS in easy, simple words – I strongly recommend you get the Beginning SSRS book from Joes 2 Pros.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article

  • dynamic urls and links on one web page

    - by John
    I am trying to figure out how to create dynamic links and URLs on a static webpage. What I want to do is the following: I have a single webpage, for example MYWEBPAGEdotCOM/INDEX.HTML, that will always look the same except for one link on the page. The link on the page would be, for example: LINK TO AFFILIATE: affiliatedotCOM/my-affiliate_code_here_DYNAMIC_REFERER. The only thing that would change is the "DYNAMIC_REFERER", based on which dynamic URL was used to reach the page: MYWEBPAGEdotCOM/INDEX.PHP_id=test1, MYWEBPAGEdotCOM/INDEX.PHP_id=test2, MYWEBPAGEdotCOM/INDEX.PHP_id=test3, MYWEBPAGEdotCOM/INDEX.PHP_id=test4, which would only change the dynamic link on the page to: affiliatedotCOM/my-affiliate_code_here_test1, affiliatedotCOM/my-affiliate_code_here_test2, affiliatedotCOM/my-affiliate_code_here_test3, affiliatedotCOM/my-affiliate_code_here_test4. Can someone tell me how I could go about doing this? I just don't want to have to make hundreds of pages, and doing it this way would prevent me from having to do so.

    Read the article

  • Xen 4.1 host periodically dropping network packets of domU

    - by Dyutiman Chakraborty
    I have xen 4.1 Host running on a ubuntu 12.04 LTS Server with ip 153.x.x.54. I have setup 2 VMs on it, namely, "dev.mydomain.com" and "web.mydomain.com" with ips 195.X.X.2 and 195.x.x.3 respectively. For network the VMs connect through xendbr0 (xen-bridge), and can accces the network properly. I can also login to the VMs with ssh with no issue. However when I ping any of the VMs, there is a high amount of periodic packet drop. If I the ping the xen host (dom0) there is no packet drop. Following is a output of "tcpdump | grep ICMP" on dOM0 while I was pinging one of the domU tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 05:19:55.682493 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 30, length 64 05:19:56.691144 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 31, length 64 05:19:57.698776 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 32, length 64 05:19:58.706784 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 33, length 64 05:19:59.714751 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 34, length 64 05:20:00.723144 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 35, length 64 05:20:01.730349 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 36, length 64 05:20:02.739017 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 37, length 64 05:20:03.746806 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 38, length 64 05:20:06.770326 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 41, length 64 05:20:07.778801 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 42, length 64 05:20:08.786481 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 43, length 64 05:20:09.794720 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 44, length 64 05:20:10.802395 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 45, length 64 05:20:11.810770 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 46, length 64 05:20:12.818511 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 47, length 64 05:20:13.826817 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 48, length 64 05:20:14.835125 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 49, length 64 05:20:15.842138 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 50, length 64 05:20:18.274072 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 1, length 64 05:20:19.282347 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 
2, length 64 05:20:20.290746 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 3, length 64 05:20:21.297910 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 4, length 64 05:20:22.305656 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 5, length 64 05:20:23.314369 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 6, length 64 05:20:24.322055 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 7, length 64 05:20:25.329782 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 8, length 64 05:20:26.338473 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 9, length 64 05:20:27.346411 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 10, length 64 05:20:28.354175 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 11, length 64 05:20:29.361640 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 12, length 64 05:20:30.370026 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 13, length 64 05:20:31.377696 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 14, length 64 05:20:32.386151 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 15, length 64 05:20:33.394118 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 16, length 64 05:20:34.402058 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 17, length 64 05:20:35.409002 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 18, length 64 05:20:36.417692 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 19, length 64 05:20:36.496916 IP6 fe80::3285:a9ff:feec:fc69 > ip6-allnodes: HBH ICMP6, multicast listener querymax resp delay: 1000 addr: ::, length 24 05:20:36.499112 IP6 fe80::21c:c0ff:fe6c:c091 > ff02::1:ff6c:c091: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff6c:c091, length 24 05:20:36.507041 IP6 fe80::227:eff:fe11:fa3f > ff02::1:ff00:2: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff00:2, length 24 05:20:36.523919 IP6 fe80::21c:c0ff:fe77:6257 > ff02::1:ff77:6257: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff77:6257, length 24 05:20:36.544785 IP6 fe80::54:ff:fe12:ea9a > ff02::1:ff12:ea9a: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff12:ea9a, length 24 05:20:36.581740 IP6 fe80::5604:a6ff:fef1:6da7 > ff02::1:fff1:6da7: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:fff1:6da7, length 24 05:20:36.600103 IP6 fe80::8a8:8aa0:5e18:917a > ff02::1:ff18:917a: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff18:917a, length 24 05:20:36.601989 IP6 fe80::227:eff:fe11:fa3e > ff02::1:ff11:fa3e: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff11:fa3e, length 24 05:20:36.611090 IP6 
fe80::dcad:56ff:fe57:3bbe > ff02::1:ff57:3bbe: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff57:3bbe, length 24 05:20:36.660521 IP6 fe80::54:ff:fe02:1d31 > ff02::1:ff00:6: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff00:6, length 24 05:20:36.698871 IP6 fe80::21e:8cff:feb4:9f89 > ff02::1:ffb4:9f89: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ffb4:9f89, length 24 05:20:36.776548 IP6 fe80::54:ff:fe12:ea9a > ff02::1:ff01:7: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff01:7, length 24 05:20:36.781910 IP6 fe80::54:ff:fe8f:6dd > ff02::1:ff00:3: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff00:3, length 24 05:20:36.865475 IP6 fe80::21c:c0ff:fe4a:ae9f > ff02::1:ff4a:ae9f: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff4a:ae9f, length 24 05:20:36.908333 IP6 fe80::dcad:45ff:fe90:84db > ff02::1:ff90:84db: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff90:84db, length 24 05:20:36.919653 IP6 fe80::54:ff:fe12:ea9a > ff02::1:ff00:7: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff00:7, length 24 05:20:36.924276 IP6 fe80::59a2:2a4a:2082:6dee > ff02::1:ff82:6dee: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff82:6dee, length 24 05:20:37.001905 IP6 fe80::54:ff:fe8f:6dd > ff02::1:ff8f:6dd: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff8f:6dd, length 24 05:20:37.042403 IP6 fe80::54:ff:fe95:54f2 > ff02::1:ff95:54f2: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff95:54f2, length 24 05:20:37.090992 IP6 fe80::21c:c0ff:fe77:62ac > ff02::1:ff77:62ac: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff77:62ac, length 24 05:20:37.098118 IP6 fe80::d63d:7eff:fe01:b67f > ff02::1:ff01:b67f: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff01:b67f, length 24 05:20:37.118784 IP6 fe80::54:ff:fe12:ea9a > ff02::202: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::202, length 24 05:20:37.168548 IP6 fe80::54:ff:fe02:1d31 > ff02::1:ff02:1d31: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff02:1d31, length 24 05:20:41.743286 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 1, length 64 05:20:41.743542 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 1, length 64 05:20:42.743859 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 2, length 64 05:20:42.743952 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 2, length 64 05:20:43.745689 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 3, length 64 05:20:43.745777 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 3, length 64 05:20:44.746706 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 4, length 64 05:20:44.746796 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 4, length 64 05:20:45.747986 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 5, length 64 05:20:45.748082 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, 
seq 5, length 64 05:20:46.749834 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 6, length 64 05:20:46.749920 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 6, length 64 05:20:47.750838 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 7, length 64 05:20:47.751182 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 7, length 64 05:20:48.751909 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 8, length 64 05:20:48.751991 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 8, length 64 05:20:49.752542 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 9, length 64 05:20:49.752620 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 9, length 64 05:20:50.754246 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 10, length 64 05:20:51.753856 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 11, length 64 05:20:52.752868 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 12, length 64 05:20:53.754174 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 13, length 64 05:20:54.753972 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 14, length 64 05:20:55.753814 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 15, length 64 05:20:56.753391 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 16, length 64 05:20:57.753683 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 17, length 64 05:20:58.753487 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 18, length 64 05:20:59.754013 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 19, length 64 05:21:00.753169 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 20, length 64 05:21:01.753757 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 21, length 64 05:21:02.753307 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 22, length 64 05:21:03.753021 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 23, length 64 05:21:04.753628 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 24, length 64 ^C479 packets captured 718 packets received by filter 238 packets dropped by kernel 3 packets dropped by interface You see the ping request is not responed to initially, then for a moment it is replied back and then again no reply. I have tried everything (to the best of my knowledge) to fix this, but can't find any answer Any help will be greatly appreciated Thanks.

    Read the article

  • Dynamic Class Inheritance For PHP

    - by VirtuosiMedia
    I have a situation where I think I might need dynamic class inheritance in PHP 5.3, but the idea doesn't sit well and I'm looking for a different design pattern to solve my problem if it's possible.

    Use Case: I have a set of DB abstraction layer classes that dynamically compile SQL queries, with one DAL class for each DB type (MySQL, MSSQL, Oracle, etc.). Each table in the database has its own class that extends the appropriate DAL class. The idea is that you interact with the table classes, but never directly use the DAL class. If you want to support a different DB type for your app, you don't need to rewrite any queries or even any code; you simply change a setting that swaps one DAL class out for another... and that's it. To give you a better idea of how this is used, you can take a look at the DAL class, the table classes, and how they are used on this StackExchange Code Review page. To really understand what I'm trying to do, please take a look at my implementation first before suggesting a solution.

    Issues: The strategy that I used previously was to have all of the DAL classes share the same class name. This eliminated autoloading, so I had to manually load the appropriate DAL class in a switch statement. However, this approach presents some problems for testing and documentation purposes, so I'd like to find a different, more elegant way to load the correct DAL class.

    Update to clarify the issue: The problem basically boils down to inconsistencies between the class name (pre-PHP 5.3) or class namespace (PHP 5.3) and its location in the directory structure. At this point, all of my DAL classes have the same name, DBObject, but reside in different folders: MySQL, Oracle, etc. My table classes all extend DBObject, but which DBObject they extend varies depending on which one has been loaded. Basically, I'm trying to have my cake and eat it too. The table classes act as a stable API and extend a dynamic backend, the DAL (DBObject) classes. It works great, but I outsmarted myself: because of the inconsistencies between the class names and their locations, I can't autoload the DBObject, which makes running unit tests and generating API docs impossible for the DBObject classes because the tests and docs rely on autoloading. Just loading the appropriate DBObject into memory using a factory method won't work because there will be times when I need to load multiple DBObjects for testing. Because the classes currently share a name, this causes a "class already defined" error. I can make exceptions for the DBObjects in my test code, obviously, but I'm looking for something a little less hacky, as there may be future instances where something similar would need to be done.

    Solutions? Worst case scenario, I can continue my current strategy, but I don't like it very much, especially as I'll soon be converting my code to PHP 5.3. I suspect that I can use some sort of dynamic inheritance via either namespaces (preferred) or a dynamic class extension, but I haven't been able to find good examples of this implemented in the wild. In your answers, please suggest either an alternate pattern that would work for this use case or an example of dynamic inheritance done right. Please assume PHP 5.3 with namespaced code. Any code examples are greatly encouraged. The preferred constraints for the solution are:

      - The DAL class can be autoloaded.
      - The DAL classes don't share the exact same namespace, but do share the same class name. As an example, I would prefer to use classes named DbObject that use namespaces like Vm\Db\MySql and Vm\Db\Oracle.
      - Table classes don't have to be rewritten with a change in DB type.
      - The appropriate DB type is determined via a single setting only. That setting is the only thing that should need to change to interchange DB types. Ideally, the setting check should occur only once per page load, but I'm flexible on that.

    Read the article

  • Does Java support dynamic method invocation?

    - by eSKay
    class A { void F() { System.out.println("a"); } }
    class B extends A { void F() { System.out.println("b"); } }
    public class X {
        public static void main(String[] args) {
            A objA = new B();
            objA.F();
        }
    }
    Here, F() is being invoked dynamically, isn't it? This article says: "... the Java bytecode doesn't support dynamic method invocation. There are four supported invocation modes: invokestatic, invokespecial, invokeinterface and invokevirtual. These modes allow calling methods with a known signature. We are talking about a strongly typed language, which allows some checks to be made directly at compile time. On the other side, dynamic languages use dynamic types, so we can call a method that is unknown at compile time, but that's completely impossible with the Java bytecode." What am I missing?
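    For what it's worth, the two meanings of "dynamic" can be seen side by side in plain Java. The call through the superclass reference compiles to invokevirtual: the method signature is fixed at compile time, but the implementation is chosen at run time from the receiver's actual class (dynamic dispatch). Calling a method whose name is only known at run time is what the quoted article means by dynamic method invocation, and that requires reflection (or, since Java 7, the invokedynamic instruction used by other JVM languages). A small self-contained sketch:

        import java.lang.reflect.Method;

        public class DispatchDemo {
            static class A { public void f() { System.out.println("a"); } }
            static class B extends A { public void f() { System.out.println("b"); } }

            public static void main(String[] args) throws Exception {
                A obj = new B();
                obj.f();  // invokevirtual: signature known at compile time,
                          // implementation chosen at run time -> prints "b"

                // A method whose *name* is only known at run time needs reflection:
                String methodName = args.length > 0 ? args[0] : "f";
                Method m = obj.getClass().getMethod(methodName);
                m.invoke(obj);  // prints "b" again when methodName is "f"
            }
        }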

    Read the article

  • Debugging dynamic SQL + dynamic tables in MS SQL Server 2008

    - by Hamish Grubijan
    Hi, I have a messy stored procedure which uses dynamic SQL. I can debug it at runtime by adding print @sql, where @sql is the string containing the dynamic SQL, right before I call execute(@sql). Now, the multi-page stored procedure also creates dynamic tables and uses them in a query. I want to print those tables to the console right before I do an execute, so that I know exactly what the query is trying to do. However, SQL Server 2008 does not like that. When I try print #temp_table; and compile the stored procedure, I get this error: The name "#temp_table" is not permitted in this context. Valid expressions are constants, constant expressions, and (in some contexts) variables. Column names are not permitted. Please help.

    Read the article

  • Recover data from Dynamic Disk (MBR) bigger than 2TB

    - by Helder
    Here is the situation: a Promise FastTrak TX4310 array with 3 disks (750 GB each) in RAID 5, which comes to around 1500 GB of usable data. Last week I had the idea of expanding the RAID with an additional 750 GB disk, which would bring the volume to around 2250 GB. I plugged in the disk and used the WebPAM software to do the RAID expansion. However, I didn't count on the 2 TB MBR limit, as I didn't remember that the disk was using MBR instead of GPT, and I didn't check it prior to the expansion. After a couple of days of expansion, today when I got home, the disk in Windows Disk Management showed the message "Invalid disk", and when I try to activate it, it says "The operation is not allowed on the Invalid pack". From what I figured, the logical volume on the RAID expanded, passed that info up to the Windows layer, and I ended up with a "larger than 2 TB" MBR disk. I'm hoping that somehow I can still recover some data from this, and I was wondering if I can "rewrite" the MBR structure back to the 1500 GB partition size, so I can access the partition in Windows. Right now I'm running an "Analyse" with TestDisk, as I hope the program will pick up the old 1500 GB structure and allow me to somehow revert to it. I think that even though the logical drive in the RAID is bigger than 2 TB, I can somehow correct the MBR to show the 1500 GB partition again. I had a similar problem once, and I was able to recover the data using a similar method. What do you guys think? Is it a dead end? Am I totally screwed because of the extra RAID layer that I'm not accounting for? Or is there another way to deal with this? Thanks all!

    Read the article

  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means; the answer is as frequently as I reasonably can, but pragmatically, as a benchmark let's say we are hoping for every 15 minutes) and feed it into a data warehouse.
    How much data? At peak times we are talking approx 80-100k rows per minute hitting the OLTP side; off-peak this drops significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5.
    Best solution? From what I can piece together, the most practical solution is as follows:
    - Create a TRIGGER to write all DML activity to a rotating CSV log file
    - Perform whatever transformations are required
    - Use the native DW data pump tool to efficiently pump the transformed CSV into the DW
    Why this approach? TRIGGERs allow selective tables to be targeted rather than being system wide, their output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy; SLONY uses a similar approach and the overhead is acceptable. CSV is easy and fast to transform, and easy to pump into the DW.
    Alternatives considered:
    - Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). The problem with this is that it looked very verbose relative to what I needed and was a little trickier to parse and transform. However, it could be faster, as I presume there is less overhead compared to a TRIGGER. It would certainly make the admin easier, as it is system wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log).
    - Querying the data directly via an ETL tool such as Talend and pumping it into the DW. The problem is the OLTP schema would need to be tweaked to support this, and that has many negative side effects.
    - Using a tweaked/hacked SLONY. SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner.
    - Using the WAL.
    Has anyone done this before? Want to share your thoughts?
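    If the native data pump tool ends up being awkward to drive from the rotating CSV files, a plain JDBC batch load into a staging table is a serviceable fallback for the third step. The sketch below is purely illustrative and makes several assumptions not present in the question: the staging table name, the CSV column layout (op, table, timestamp, payload) and the batch size are invented, and a real loader should use a proper CSV parser instead of a naive split.

        import java.io.BufferedReader;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class CsvPump {
            public static void main(String[] args) throws Exception {
                // args[0] = path to a rotated, already-transformed CSV; args[1] = JDBC URL of the DW
                try (Connection dw = DriverManager.getConnection(args[1]);
                     BufferedReader in = Files.newBufferedReader(Paths.get(args[0]));
                     PreparedStatement ps = dw.prepareStatement(
                             "INSERT INTO dw_stage.oltp_changes (op, table_name, changed_at, payload) "
                                     + "VALUES (?, ?, ?, ?)")) {
                    dw.setAutoCommit(false);
                    String line;
                    int batched = 0;
                    while ((line = in.readLine()) != null) {
                        String[] f = line.split(",", 4);  // naive split: assumes no quoted commas
                        if (f.length < 4) continue;       // skip malformed lines
                        ps.setString(1, f[0]);
                        ps.setString(2, f[1]);
                        ps.setString(3, f[2]);
                        ps.setString(4, f[3]);
                        ps.addBatch();
                        if (++batched % 5000 == 0) ps.executeBatch();  // keep batches bounded
                    }
                    ps.executeBatch();
                    dw.commit();
                }
            }
        }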

    Read the article

  • Reference Data Management and Master Data: Are They Related?

    - by Mala Narasimharajan
    Submitted By: Rahul Kamath
    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grained control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.
    What is reference data? How does it relate to master data? Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets, including master data, and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data, while changes to product categories and geo hierarchy are examples of reference data changes. [1]
    The following is an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards.
    - Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values
    - Taxonomies: Industry Classification Categories and Codes, e.g., the North American Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss
    - Relationships / Mappings: Product / Segment and Product / Geo; City → State → Postal Codes; Customer / Market Segment and Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings
    - Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time and Time Zones (e.g., ISO 8601); Tax Rates
    Why manage reference data? Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.
    Sample Use Cases of Reference Data Management
    Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point.
    Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the ICD-9 and ICD-10 code sets offer diagnosis codes at very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as master data management does. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings, either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.
    Spend Management: Product, Service & Supplier Codes. Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.
    Change Management: Point of Sale Transaction Codes and Product Codes. In the specialty finance industry, enterprises are confronted with usury laws, governed at the state and local level, that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.
    Multi-National Companies: Industry Classification Schemes. As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries use different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.
    References
    [1] Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.
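    To make the healthcare example above concrete: the reason a cross-walk needs human judgment is that it is a one-to-many mapping, where some legacy codes translate cleanly while others fan out to several candidates that a data steward has to review. A toy Java sketch of that structure (the codes and mappings are placeholders, not a clinical reference):

        import java.util.List;
        import java.util.Map;

        public class CrosswalkDemo {
            // Forward map: one ICD-9 code may fan out to several ICD-10 candidates.
            static final Map<String, List<String>> ICD9_TO_ICD10 = Map.of(
                    "250.00", List.of("E11.9"),                               // single candidate
                    "648.80", List.of("O99.810", "O99.814", "O99.815"));      // several candidates

            public static void main(String[] args) {
                ICD9_TO_ICD10.forEach((icd9, candidates) -> {
                    String status = candidates.size() == 1
                            ? "auto-mapped"
                            : "flag for stewardship review";
                    System.out.println(icd9 + " -> " + candidates + "  [" + status + "]");
                });
            }
        }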

    Read the article

  • dynamic: how to test if a property is available

    - by Miau
    Scenario is very simple; somewhere in the code I have this:
        dynamic myVariable = GetDataThatLooksVerySimilarButNotTheSame();
        //how to do this?
        if (myVariable.MyProperty.Exists)
            //Do stuff
    So basically the question is how to check (avoiding exceptions) that a certain property is available on my dynamic variable. I could do GetType(), but I'd rather avoid that; I don't actually want to know the type of the object, I want to know whether a property (or method, if that makes life easier) is available. Any pointers? Cheers

    Read the article

  • Focus on Oracle Data Profiling and Data Quality 11g - 24/Feb/11

    - by Claudia Costa
    Thursday 24th February, 11am GMT. Oracle offers an integrated suite of Data Quality software architected to discover and correct today's data quality problems and establish a platform prepared for tomorrow's as-yet-unknown data challenges. Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis. Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single- and double-byte data are processed in local languages to provide a unique and centralized view of customers, products and services. During this briefing, Data Integration Solution Specialists will provide a technical overview and a walkthrough.
    Agenda:
    - Oracle Data Integration Strategy overview
    - A focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator: Oracle Data Profiling; Oracle Data Quality for Data Integrator
    - Live demo
    - Q&A
    This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Registrations received less than 24 hours prior to the start time may not receive confirmation to attend. To register, click here. For any questions please contact [email protected]

    Read the article

  • New insights I can learn from the Groovy language

    - by Andrea
    I realize that, for a programmer coming from the Java world, Groovy contains a lot of new ideas and cool tricks. My situation is different, as I am learning Groovy coming from a dynamic background, mainly Python and Javascript. When learning a new language, I find that it helps me if I know beforehand which features are more or less old acquaintances under a new syntax and which ones are really new, so that I can concentrate on the latter. So I would like to know which traits distinguish Groovy among the dynamic languages. What are the ideas and insights that a programmer well-versed in dynamic languages should pay attention to when learning Groovy?

    Read the article
