Search Results

Search found 44 results on 2 pages for 'niraj doshi'.

Page 1/2 | 1 2  | Next Page >

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building and studying software systems. He is a top-rated speaker at various technical forums, including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise's bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and reviews of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems, and most recently Windows Azure. When he is away from his laptop, you will find him taking deep dives into automobiles, pottery, rafting, photography, cooking, and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt.

    As the data in your applications grows, it's the database that usually becomes a bottleneck. It's hard to scale a relational DB, and the preferred approach for large-scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools and techniques that allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, an effective schema (so an information worker can self-serve), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse, or an OLAP system, arises. A key point to remember is that a data warehouse is mostly a relational database. It's built on top of the same concepts: tables, rows, columns, primary keys, foreign keys, etc. Before we talk about how data warehouses are typically structured, let's understand the key components that create a data flow between OLTP systems and OLAP systems. There are three major areas to it:

    a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in this context could be how many customers, divided by geography, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out-of-the-box features provided by some databases; e.g., SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements.

    b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system, and loads them into the data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them.

    c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains is how the warehouse will record these changes. This structural concern in the data warehouse arena is often covered under something called Slowly Changing Dimensions (SCD). While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This creates multiple records for a given entity in the relational database, which is why data warehouses prefer having their own primary key, often known as a surrogate key.
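
    To make the SCD idea concrete, here is a minimal, hypothetical sketch in Java (used only because it is the one language that appears elsewhere on this page; a real warehouse would express this as tables and ETL logic). It models a Type 2 slowly changing dimension in memory: each change to a customer's category closes the current row and appends a new one with a fresh surrogate key, so history is preserved. All names and data are invented for illustration.

    import java.time.LocalDate;
    import java.util.ArrayList;
    import java.util.List;

    public class ScdType2Sketch {

        // One row of a (hypothetical) customer dimension: surrogate key, business key,
        // the tracked attribute, and the validity window that makes it "slowly changing".
        record CustomerRow(long surrogateKey, String customerId, String category,
                           LocalDate validFrom, LocalDate validTo) {}

        private final List<CustomerRow> dimension = new ArrayList<>();
        private long nextSurrogateKey = 1;

        // Apply a change captured from the OLTP system: close the current row (if any)
        // and append a new row, instead of overwriting the old value.
        void applyChange(String customerId, String newCategory, LocalDate changeDate) {
            for (int i = 0; i < dimension.size(); i++) {
                CustomerRow row = dimension.get(i);
                if (row.customerId().equals(customerId) && row.validTo() == null) {
                    dimension.set(i, new CustomerRow(row.surrogateKey(), row.customerId(),
                            row.category(), row.validFrom(), changeDate));
                }
            }
            dimension.add(new CustomerRow(nextSurrogateKey++, customerId, newCategory,
                    changeDate, null));
        }

        public static void main(String[] args) {
            ScdType2Sketch dim = new ScdType2Sketch();
            dim.applyChange("C-42", "silver", LocalDate.of(2011, 1, 1));
            dim.applyChange("C-42", "gold", LocalDate.of(2011, 6, 1));   // customer upgraded
            dim.dimension.forEach(System.out::println);                  // both rows kept
        }
    }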

    As I mentioned, a data warehouse is just a relational database, but the industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one), which is easy to understand and use, easy to query, and easy to slice and dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables hold these numbers and have references (foreign keys) to a set of tables that provide context around those facts. E.g., if you have recorded 10,000 USD in sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table containing the dates between which that sale was made. These agent and time tables are called dimensions, and they provide context to the numbers stored in fact tables. This schema structure, with the fact at the center surrounded by dimensions, is called a Star schema. A similar structure, with the difference that the dimension tables are normalized, is called a Snowflake schema.

    This relational structure of facts and dimensions serves as an input for another analysis structure called a Cube. Though physically a Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it's a multidimensional structure where dimensions define the sides of the cube and facts define the content. Facts are often called Measures inside a cube. Dimensions often tend to form a hierarchy; e.g., Product may be broken into categories, and categories in turn into individual items. Category and Item are often referred to as Levels, and their constituents as Members, with the overall structure called a Hierarchy. Measures are rolled up as per the dimensional hierarchy; these rolled-up measures are called Aggregates (a small sketch of this roll-up follows below). Now this may seem like an overwhelming vocabulary to deal with, but don't worry, it will sink in as you start working with Cubes and others.
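
    Illustrating the roll-up of measures along a dimension hierarchy, here is a small, hypothetical Java sketch (again, Java only because it is the language already present on this page; an OLAP engine such as SQL Server Analysis Services would do this for you). A tiny in-memory fact table is aggregated first at the Item level and then rolled up to the Category level of a Product hierarchy; the fact rows and names are invented for illustration.

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class RollupSketch {

        // A single row of a (hypothetical) sales fact table: 'amount' is the measure,
        // 'category' and 'item' are levels of the Product dimension hierarchy.
        record SaleFact(String category, String item, double amount) {}

        public static void main(String[] args) {
            List<SaleFact> factTable = List.of(
                    new SaleFact("Bikes", "Road Bike", 10_000),
                    new SaleFact("Bikes", "Mountain Bike", 4_000),
                    new SaleFact("Accessories", "Helmet", 500),
                    new SaleFact("Accessories", "Pump", 150));

            // Aggregate the measure at the leaf level (Item)...
            Map<String, Double> byItem = factTable.stream()
                    .collect(Collectors.groupingBy(SaleFact::item,
                            Collectors.summingDouble(SaleFact::amount)));

            // ...then roll it up one level of the hierarchy (Category).
            Map<String, Double> byCategory = factTable.stream()
                    .collect(Collectors.groupingBy(SaleFact::category,
                            Collectors.summingDouble(SaleFact::amount)));

            System.out.println("Sales by item:     " + byItem);
            System.out.println("Sales by category: " + byCategory);
        }
    }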

    Let's see a few other terms that we run into while talking about data warehouses. ODS, or an Operational Data Store, is a frequently misused term. There will be a few users in your organization who want to report on the most current data and can't afford to miss a single transaction in their report. Then there is another set of users who typically don't care how current the data is – mostly senior-level executives who are interested in trending, mining, forecasting, strategizing, etc. and don't care about one specific transaction. This is where an ODS comes in handy. An ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS is short-lived, i.e. kept for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS, either daily or weekly, depending on business drivers.

    Data marts are another frequently talked-about topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage, track, etc. Hence they are often scaled down to something called a Data mart that supports a specific segment of the business, such as sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. The industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse; the data warehouse here acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure.

    SQL Azure is essentially SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability, and reduced management overhead. For an introduction to SQL Azure, check out the post by Pinal here. In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will write custom code and host it in an Azure worker role; the aim is to add features that are not available with SQL Azure currently, or features that need to be customized for flexibility. This way we extend SQL Azure's capability by building solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud. An Azure worker role can perform background processes, which makes it an ideal tool for handling processes such as synchronization and backup.

    First, we will focus on writing worker-role code that synchronizes SQL Azure databases. Before we do so, let's see some scenarios in which synchronization between SQL Azure databases is beneficial:

    - Scaling out access over multiple databases enables us to handle workloads efficiently.
    - As of now, a SQL Azure database can be hosted in any one of six datacenters. By synchronizing databases located in different datacenters, one can enable access to geographically distributed data.

    Let us also see some scenarios in which SQL Server to SQL Azure database synchronization is beneficial:

    - To back up a SQL Azure database on local infrastructure.
    - Rather than investing in local infrastructure for increased workloads, such workloads could be handled in the cloud.
    - The ability to extend data to datacenters located across the world, enabling efficient data access from remote locations.

    Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. In this article, I aim to provide a bird's-eye view of what code that synchronizes SQL Azure databases looks like, and then list resources that can help you develop the solution from scratch. If you add a new worker role to the cloud-based project, this is what the code will look like. (Note: I have added comments to the skeleton code to point out the modifications that will be required to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf )

    Enabling synchronization of SQL Azure databases through the Sync Framework is a two-step process. In the first step, the database is provisioned: the Sync Framework creates tracking tables, stored procedures, triggers, and tables to store the metadata that enables synchronization. This is a one-time step, and the code for it goes in the Setup() function, which is called once when the worker role starts. The second step is the continuous (or on-demand) synchronization of the SQL Azure databases by propagating changes between them. This is done on a continuous basis by calling the Sync() function in the while loop; the logic that synchronizes changes between the SQL Azure databases goes in the Sync() function. Discussing the coding part step by step is out of the scope of this article.

    Therefore, let me point you to a resource, given here. Also, note that before you start developing the code, you will need to install the Sync Framework 2.1 SDK (download here), and you will need to reference some libraries before you start coding. Details are available in the article I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter. For pricing information, go here.

    Currently, a tool named Data Sync, which is built on top of the Sync Framework, is available as a CTP; it allows SQL Azure <-> SQL Server and SQL Azure <-> SQL Azure synchronization without writing a single line of code. However, in some cases the custom code shown in this blog post provides flexibility that is not available with Data Sync. For instance, filtering is not supported in SQL Azure Data Sync CTP2; if you need such functionality now, you have the option of developing custom code using the Sync Framework.

    Now, this code can easily be extended to synchronize on a schedule. Let us say we want the databases to be synchronized every day at 10:00 pm. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf) Don't you think that by writing such code, we are imitating the functionality provided by SQL Server Agent for SQL Server? Think about it. We are scheduling our administrative task by writing custom code – in other words, we have developed a "lightweight SQL Server Agent for SQL Azure"! Since SQL Server Agent is not currently available in the cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role. (A small sketch of this scheduling loop follows below.)
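
    The post's actual snippets are C# against the Sync Framework and are only linked above as PDFs; as a rough, language-agnostic sketch of the pattern being described – run Setup() once when the role starts, then call Sync() every day at 10:00 pm – here is a minimal Java version (Java is used only because it is the language already present on this page). The setup()/sync() bodies are placeholders; the real logic would provision and synchronize the databases through the Sync Framework.

    import java.time.Duration;
    import java.time.LocalDateTime;
    import java.time.LocalTime;

    public class SyncWorkerSketch {

        // One-time provisioning step (tracking tables, metadata, ...) -- placeholder.
        static void setup() {
            System.out.println("Provisioning databases for synchronization...");
        }

        // Propagates changes between the databases -- placeholder for the real sync logic.
        static void sync() {
            System.out.println("Synchronizing databases at " + LocalDateTime.now());
        }

        public static void main(String[] args) throws InterruptedException {
            setup(); // called once when the worker starts

            while (true) {
                // Sleep until the next 10:00 pm, then run one synchronization pass.
                LocalDateTime now = LocalDateTime.now();
                LocalDateTime nextRun = now.toLocalDate().atTime(LocalTime.of(22, 0));
                if (!nextRun.isAfter(now)) {
                    nextRun = nextRun.plusDays(1);
                }
                Thread.sleep(Duration.between(now, nextRun).toMillis());
                sync();
            }
        }
    }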

    Now if you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, so we need to store the state of the job ourselves, and the choices available are SQL Azure or Azure tables. Note that this solution requires custom code and is not UI driven; however, for now, it can act as a temporary solution until SQL Server Agent is made available in the cloud. Moreover, this solution does not encompass all the functionality that SQL Server Agent provides, but it does open up an interesting avenue for scheduling tasks such as backup and synchronization of SQL Azure databases by writing custom code in the Azure worker role.

    Now, let us see one more possibility – running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or to blobs. If you upload them locally, consider the data transfer cost. If you upload them to blobs residing in the same datacenter, no transfer cost applies, but the cost of blob storage does. So, before choosing an option, you need to evaluate your preferences, keeping the cost associated with each option in mind.

    In this article, I have shown that an Azure worker role solution can be developed to synchronize SQL Azure databases, and that a lightweight SQL Server Agent for SQL Azure can be built. We also discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles. So at the end of the day, you need to ask – am I willing to build custom code and pay money to achieve this functionality?

    I hope you found this blog post interesting. If you have any questions or feedback, you can comment below or mail me at Paras[at]student-partners[dot]com

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to create a piano UI on the iPhone?

    - by Niraj
    Hi, I am a newbie at creating piano-based UI applications. I am developing an application in which I have to use a piano. How do I create a piano UI on the iPhone? Can I create it using Interface Builder, or is there another way to build the piano? I am in urgent need of such a UI, so if anybody has a solution, useful code, or a link, it would be appreciated. Thanks, Niraj

    Read the article

  • Can't install .NET application on client's PC

    - by Niraj Doshi
    Hello all, my client's PC runs Windows 7 Ultimate with the .NET Framework 4 Client Profile. I am unable to install my application, which was developed in VS2008. I tried uninstalling .NET Framework 4 from his PC and running the cleanup tool provided by Microsoft, but I am still unable to install it successfully; it gives Error 1001. I tried running the program as administrator, and I also tried turning on the .NET 3.5 feature from Add or Remove Programs. Thanks in advance. :) Edit: The error I get is shown here. Furthermore, I have confirmed that it is a 32-bit processor and I run the x86 release version of the setup. The application was developed on Windows 7 with .NET Framework 3.5. I have installed this application on 7 PCs which have .NET 3.5 installed (running Windows XP, Vista, and Windows 7), and all are working fine. On the client's PC, when I try to install .NET 3.5 again, the installer starts but then disappears suddenly without doing anything. I have tried turning on the .NET 3.5 framework feature from Control Panel > Programs and Features, running the program as Administrator, and setting the application setup to Windows XP and Vista compatibility mode. But the issue still persists. Thanks :)

    Read the article

  • Why fill() and copy() of Collections in Java are implemented this way

    - by Priyank Doshi
    Looking at the JDK source, Collections.fill() is written as below:

        public static <T> void fill(List<? super T> list, T obj) {
            int size = list.size();

            if (size < FILL_THRESHOLD || list instanceof RandomAccess) {
                for (int i = 0; i < size; i++)
                    list.set(i, obj);
            } else {
                ListIterator<? super T> itr = list.listIterator();
                for (int i = 0; i < size; i++) {
                    itr.next();
                    itr.set(obj);
                }
            }
        }

    It's easy to understand why they didn't use a ListIterator when the condition (size < FILL_THRESHOLD || list instanceof RandomAccess) holds, as far as RandomAccess is concerned. But what is the use of size < FILL_THRESHOLD here? I mean, is there any significant performance benefit in using the iterator for size >= FILL_THRESHOLD but not for size < FILL_THRESHOLD? I see the same approach in Collections.copy() as well:

        public static <T> void copy(List<? super T> dest, List<? extends T> src) {
            int srcSize = src.size();
            if (srcSize > dest.size())
                throw new IndexOutOfBoundsException("Source does not fit in dest");

            if (srcSize < COPY_THRESHOLD ||
                (src instanceof RandomAccess && dest instanceof RandomAccess)) {
                for (int i = 0; i < srcSize; i++)
                    dest.set(i, src.get(i));
            } else {
                ListIterator<? super T> di = dest.listIterator();
                ListIterator<? extends T> si = src.listIterator();
                for (int i = 0; i < srcSize; i++) {
                    di.next();
                    di.set(si.next());
                }
            }
        }

    FYI:

        private static final int FILL_THRESHOLD = 25;
        private static final int COPY_THRESHOLD = 10;
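
    As a hypothetical illustration of why the iterator path matters for non-random-access lists (this sketch is not part of the original question): indexed set(i, obj) on a LinkedList has to walk the list on every call, so an indexed fill is quadratic, while a single ListIterator pass is linear. Below the threshold the lists are so small that the simpler indexed loop is at least as cheap and avoids creating an iterator, which is the usual explanation for constants like FILL_THRESHOLD.

    import java.util.Collections;
    import java.util.LinkedList;
    import java.util.List;
    import java.util.ListIterator;

    public class FillComparison {

        // Indexed variant: each set(i, obj) walks the LinkedList, so the loop is O(n^2).
        static <T> void fillByIndex(List<T> list, T obj) {
            for (int i = 0; i < list.size(); i++)
                list.set(i, obj);
        }

        // Iterator variant: one sequential pass, O(n) for any List implementation.
        static <T> void fillByIterator(List<T> list, T obj) {
            for (ListIterator<T> it = list.listIterator(); it.hasNext(); ) {
                it.next();
                it.set(obj);
            }
        }

        public static void main(String[] args) {
            List<Integer> linked = new LinkedList<>(Collections.nCopies(50_000, 0));

            long t0 = System.nanoTime();
            fillByIndex(linked, 1);
            long t1 = System.nanoTime();
            fillByIterator(linked, 2);
            long t2 = System.nanoTime();

            System.out.printf("indexed fill : %d ms%n", (t1 - t0) / 1_000_000);
            System.out.printf("iterator fill: %d ms%n", (t2 - t1) / 1_000_000);
        }
    }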

    Read the article

  • httpd service keeps restarting after 15-20 mins

    - by niraj
    I have recently purchased a dedicated server which has 16 GB RAM and a 1 TB hard disk. It has cPanel, and CSF is installed as the firewall. I am mainly going to use it for a file hosting service. Since the day I moved over, the httpd service keeps restarting every 15-20 minutes; it becomes unresponsive after that, so I have to restart it manually. My httpd settings are:

    Start Servers = 5
    Minimum Spare Servers = 5
    Maximum Spare Servers = 10
    Server Limit = 20000
    Max Clients = 10000
    Max Requests Per Child = 10000
    Keep-Alive = On
    Keep-Alive Timeout = 5
    Max Keep-Alive Requests = Unlimited
    Timeout 300

    The output of top is:

    top - 14:53:41 up 1 day, 23:39, 2 users, load average: 0.10, 0.14, 0.09
    Tasks: 1563 total, 1 running, 1562 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.7%us, 0.6%sy, 0.0%ni, 98.1%id, 0.2%wa, 0.0%hi, 0.5%si, 0.0%st
    Mem: 16303780k total, 16142048k used, 161732k free, 135264k buffers
    Swap: 8224760k total, 868k used, 8223892k free, 14136616k cached

    Please help me with this; it keeps happening.

    Read the article

  • Serial on Mac OS X constantly freezes/locks/disappears for USB to Arduino

    - by Niraj D
    I have a problem with my C++ code running in Xcode, with both the AMSerial library and the generic C calls (ioctl, termios). After a fresh restart, my application works well, but after I "kill" the program the serial port (I think) is not released. I have checked my open files under /dev and have killed the connection to the serial USB device from there, but my C++ code still can't open the USB port. I have narrowed this down to a low-level Mac OS X issue where the port remains blocked indefinitely, regardless of closing it using the aforementioned libraries. Just for context, I'm trying to send numbers through my USB port, serially, to an Arduino Duemilanove at 9600 baud. Running the Serial Monitor in Arduino works perfectly fine; however, running through a C++ application it freezes up my computer, and occasionally my mouse/keyboard freeze up, requiring a hard reset. How can this problem be fixed? It seems like Mac OS X is not USB friendly!

    Read the article

  • design site map page

    - by Niraj Choubey
    I need to design a site map page for my website in ASP.NET. I am confused about whether to use a custom site map and retrieve the information from the database, or to retrieve the information from the database directly and bind it to hyperlinks. Please help.

    Read the article

  • Subversion (20014)Internal error: database is locked on NFS

    - by Niraj Gurjar
    I have Subversion set up using Apache and DAV. The OS is RHEL 4. The repository is created on an NFS server mounted on this machine. When I try to access this repository I get the following errors in the Apache logs:

    (20014)Internal error: database is locked
    Could not fetch resource information. [500, #0]
    Could not open the requested SVN filesystem [500, #200030]
    Could not open the requested SVN filesystem [500, #200030]
    The URI does not contain the name of a repository. [403, #190001]

    I did 'chmod' on that mounted partition, but the problem still persists. Any help?

    Read the article

  • Problem with file upload

    - by Niraj CHoubey
    Whenever I try to upload a file larger than the size specified in maxRequestLength, the browser shows "webpage cannot be displayed". Can someone please tell me how to solve this problem?

    Read the article

1 2  | Next Page >