Search Results

Search found 58486 results on 2340 pages for 'data integrator'.


  • Assessing Relative Maintainability

    - by João Bragança
    We (a contractor, actually) are implementing an off-the-shelf system to replace a legacy homegrown system for the core domain of the company (designing widgets). Unfortunately both systems will have to run concurrently for some time, as the product just isn't ready yet. Also, the decision was made to migrate only some of the widgets from the legacy system, based on the date of last sale activity.

    Later on a new requirement came down: certain people in the company, most of them outside of the widget development context, want to search all widgets. The search results screen has 3 pieces of data: a GUID, a human-readable id that is searchable, and a brief description (may need to be searchable in the future). In the widget details, there will be multiple screens. These screens align very well along SOA / bounded context lines - a screen for marketing data, a screen for sales history, etc. UML ahead! I am probably using the wrong kind of arrows here, so please forgive me.

    The current solution - which is not in production yet - is something like the following: both systems will be queried and the controller will merge the results. The new system has its own proprietary query language (we've alleviated this a bit with a LINQ provider). It also puts a lot of data on the wire; 15 search results typically run about 60k of unintelligible SOAP-wrapped XML. So I would prefer to avoid querying this system directly.

    These two systems publish events to help us integrate with other systems, mainly an ERP system. One of these events contains all the data necessary for the search screen, so I proposed the following alternative: project that event into a separate search database. However, I am being told that 'adding another database' will create more maintenance down the road. I believe this to be false, as I recently had to add a relatively simple feature that took several hours longer than anticipated because of the merging code.

    I want to get a feel for which system is more maintainable in the long run, and I want something more than my gut - I personally have not had the burden of maintaining any large system. Specifically, I'd like to know whether having more, specialized physical databases is more or less maintainable than having fewer, larger physical databases.
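
    As a rough sketch of what the proposed alternative implies, a handler could subscribe to the widget events the two systems already publish and project just the three search fields into a small, dedicated search store. The event shape, table, and column names below are hypothetical, assuming ADO.NET against SQL Server:

        using System;
        using System.Data.SqlClient;

        // Hypothetical event payload; per the question, the real event already
        // carries all the data needed for the search screen.
        public class WidgetUpdatedEvent
        {
            public Guid WidgetId { get; set; }
            public string HumanReadableId { get; set; }
            public string Description { get; set; }
        }

        public class WidgetSearchProjection
        {
            private readonly string _connectionString;

            public WidgetSearchProjection(string connectionString)
            {
                _connectionString = connectionString;
            }

            // Upsert one row per widget into the dedicated search table,
            // keeping it current as events arrive from either system.
            public void Handle(WidgetUpdatedEvent e)
            {
                const string sql =
                    @"MERGE WidgetSearch AS target
                      USING (SELECT @id AS WidgetId) AS source
                      ON target.WidgetId = source.WidgetId
                      WHEN MATCHED THEN
                          UPDATE SET HumanReadableId = @hid, Description = @desc
                      WHEN NOT MATCHED THEN
                          INSERT (WidgetId, HumanReadableId, Description)
                          VALUES (@id, @hid, @desc);";

                using (var conn = new SqlConnection(_connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@id", e.WidgetId);
                    cmd.Parameters.AddWithValue("@hid", e.HumanReadableId);
                    cmd.Parameters.AddWithValue("@desc", e.Description);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }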

    Read the article

  • Speaking About SQL Server

    - by AllenMWhite
    There's a lot of excitement in the SQL Server world right now, with the RTM (Release to Manufacturing) release of SQL Server 2012, and the availability of SQL Server Data Tools (SSDT). My personal speaking schedule has exploded as well. Just this past Saturday I presented a session called Gather SQL Server Performance Data with PowerShell. There are a lot of events coming up, and I hope to see you at one or more of them. Here's a list of what's scheduled so far: First, I'll be presenting a session...(read more)

    Read the article

  • What is the best way to achieve an RPO of zero and the lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We are running a payments (EFT transaction processing) application which processes high volumes of transactions 24/7, and we are currently investigating a better way of doing DB replication to our disaster recovery site.

    Our current and previous strategies have included using both DoubleTake and Redgate to replicate data to a warm stand-by. DoubleTake is the supported solution from the payments software vendor, but their (DoubleTake's) support in South Africa is very poor. We had a few issues and simply couldn't ever resolve them, so we had to give up on DoubleTake. We have been using Redgate to manually read the data from the primary site (via queries) and write to the DR site, but this is:

    1. A bad solution.
    2. Getting the software vendor hot and bothered whenever we have support issues, as it has a tendency to interfere with the payment application, which is very DB intensive.

    We recently upgraded the whole system to run on SQL 2008 R2 Enterprise, which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives:

    RPO: Zero loss - this is transactional data with financial impact, so we can't lose anything.
    RTO: Tending to zero - the business depends on our ability to process transactions; every minute we are down we are losing money.

    I have looked at a few of the other questions/answers but none meet our case exactly: SQL Server 2008 failover strategy - Log shipping or replication? How to achieve the following RTO & RPO with logshipping only using SQL Server? What is the best of two approaches to achieve DB Replication?

    My current thinking is that we should use mirroring, but I am concerned that for RPO:0 we will need to do delayed commits, and this could impact the performance of the primary DB, which is not an option. Our current DR process is to:

    1. Stop incoming traffic to the primary site and allow all in-flight transactions to complete.
    2. Allow the replication to DR to complete.
    3. Change network routing to route to the DR site.
    4. Start all applications and services on the secondary site (ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions).

    In other words, the DR database needs to catch up with the primary as quickly as possible and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back. Is there a better option than mirroring (should we be doing log-shipping too), and can anyone suggest other considerations that we should keep in mind?

    Read the article

  • A little gem from MPN – FREE online course on Architectural Guidance for Migrating Applications to Windows Azure Platform

    - by Eric Nelson
    I know a lot of technical people who work in partners (ISVs, System Integrators etc.). I know that virtually none of them would think of going to the Microsoft Partner Network (MPN) learning portal to find some deep and high quality technical content. Instead they would head to MSDN, Channel 9, msdev.com etc. I am one of those people :-) Hence imagine my surprise when I stumbled upon this little gem: Architectural Guidance for Migrating Applications to Windows Azure Platform (your company, and hence your Live ID, need to be a member of MPN – which is free to join). This is first class stuff – and represents about 4 hours, which is really 8 if you stop and ponder :)

    Course Structure

    The course is divided into eight modules. Each module explores a different factor that needs to be considered as part of the migration process.

    Module 1: Introduction: This section provides an introduction to the training course, highlighting the values of the Windows Azure Platform for developers.
    Module 2: Dynamic Environment: This section goes into detail about the dynamic environment of the Windows Azure Platform. This session will explain the difference between current development states and the Windows Azure Platform environment, detail the functions of roles, and highlight development considerations to be aware of when working with the Windows Azure Platform.
    Module 3: Local State: This session details the local state of the Windows Azure Platform, covering the different types of storage within the Windows Azure Platform (Blobs, Tables, Queues, and SQL Azure). The training will provide technical guidance on local storage usage, how to write to blobs, how to effectively use table storage, and other authorization methods.
    Module 4: Latency and Timeouts: This session goes into detail explaining the considerations surrounding latency, timeouts, and how to assess an IT portfolio.
    Module 5: Transactions and Bandwidth: This session details the performance metrics surrounding transactions and bandwidth in the Windows Azure Platform environment. It will detail the transaction and bandwidth costs involved with the Windows Azure Platform and mitigation techniques that can be used to properly manage those costs.
    Module 6: Authentication and Authorization: This session details authentication and authorization protocols within the Windows Azure Platform, covering web methods of authorization, web identification, Access Control benefits, and a walkthrough of Windows Identity Foundation.
    Module 7: Data Sensitivity: This session details data considerations that users and developers will experience when placing data into the cloud. This section of the training highlights these concerns and details the strategies developers can take to increase the security of their data in the cloud.
    Module 8: Summary: Provides an overall review of the course.

    Read the article

  • TechEd 2010 Followup

    - by AllenMWhite
    Last week I presented a couple of sessions at TechEd NA in New Orleans. It was a great experience, even though my demos didn't always work out as planned. Here are the sessions I presented: DAT01-INT Administrative Demo-Fest for SQL Server 2008: SQL Server 2008 provides a wealth of features aimed at the DBA. In this demo-fest of features we'll see ways to make administering SQL Server easier and faster, such as Centralized Data Management, Performance Data Warehouse, Resource Governor, Backup Compression...(read more)

    Read the article

  • Why do we need a format for binary executable files?

    - by user3671483
    When binary files (i.e. executables) are saved, they usually have a format (e.g. ELF or a.out) in which a header contains pointers to where data and code are stored inside the file. But why don't we store binary files directly as a plain sequence of machine instructions? Why do we need to store data separately from the code? Secondly, when the assembler creates a binary file, is that file in one of the above formats?
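
    To make the "header" point concrete, here is a minimal C# sketch that checks the 4-byte ELF magic at the start of a file and, for a 64-bit little-endian ELF on a little-endian host, reads the entry-point address stored in the header (offsets per the ELF specification; error handling omitted):

        using System;
        using System.IO;

        class ElfHeaderDemo
        {
            static void Main(string[] args)
            {
                var header = new byte[32];
                using (FileStream fs = File.OpenRead(args[0]))
                {
                    fs.Read(header, 0, header.Length);
                }

                // Every ELF file starts with the magic bytes 0x7F 'E' 'L' 'F'.
                bool isElf = header[0] == 0x7F && header[1] == (byte)'E'
                          && header[2] == (byte)'L' && header[3] == (byte)'F';
                Console.WriteLine("ELF file: " + isElf);

                // For a 64-bit ELF (EI_CLASS == 2), e_entry lives at offset 24:
                // the virtual address where execution begins. This is exactly the
                // kind of metadata a raw instruction stream could not carry.
                if (isElf && header[4] == 2)
                {
                    ulong entry = BitConverter.ToUInt64(header, 24);
                    Console.WriteLine("Entry point: 0x" + entry.ToString("x"));
                }
            }
        }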

    Read the article

  • Sybase IQ 15.4 announced: Sybase bets on Hadoop and MapReduce, and defies its parent company?

    Sybase IQ 15.4 announced for the end of November: Sybase wants to push the limits of Big Data with Hadoop and MapReduce. While SAP's annual flagship event, SAPPHIRE NOW, was in full swing, Sybase, the German vendor's new subsidiary, independently announced the release of Sybase IQ 15.4, its high-performance column-oriented analytics server for managing "big data". Meanwhile, SAP is promoting HANA, its new in-memory data caching technology ("In-Memory Computing") for accelerating the speed of data processing...

    Read the article

  • Spicing Up Your Web Services with XSLT

    The first thirteen parts of this series introduced some of the many features available within the IBM Data Studio integrated development environment (IDE) that's available for use with the IBM data servers. This installment explains how to apply Extensible Stylesheet Language Transformations (XSLT) to your Web services.

    Read the article

  • Filling a Grid with Files in a Folder - C#

    This code sample shows, in C#, how to get all the files within a specific folder and list them, including the file size, in a GridView control. To access the file system you'll need to import the System.IO namespace, and to use a DataTable you must import the System.Data namespace: using System.IO; using System.Data;
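
    A minimal sketch of the sample described above (the GridView ID, page class, and folder path are illustrative):

        using System;
        using System.Data;
        using System.IO;

        public partial class FileList : System.Web.UI.Page
        {
            // GridViewFiles is assumed to be a GridView declared in the .aspx markup.
            protected void Page_Load(object sender, EventArgs e)
            {
                var table = new DataTable();
                table.Columns.Add("Name", typeof(string));
                table.Columns.Add("Size (bytes)", typeof(long));

                // Enumerate every file in the folder, recording name and size.
                var folder = new DirectoryInfo(Server.MapPath("~/App_Data"));
                foreach (FileInfo file in folder.GetFiles())
                {
                    table.Rows.Add(file.Name, file.Length);
                }

                GridViewFiles.DataSource = table;
                GridViewFiles.DataBind();
            }
        }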

    Read the article

  • Google Analytics on Demo Site

    - by Josh Smith
    Will adding the UA code of the live site to a revision site affect anything adversely? They are, technically, two different sites with different metrics. I don't want to lose the old data when I launch the new site, of course. I would also like to work on setting up the new analytics page while the revision site is in development. Does anyone have a good workflow for setting up a revision site without losing old site data?

    Read the article

  • ASP.NET Export to Excel and Word using VB.NET and C#

    In most ASP.NET web applications there is a need to export data. This is particularly useful if the information will be used for further analysis or archiving purposes offline. This tutorial illustrates how you can export data from your ASP.NET web page (for example, data coming from an MS SQL database) to two of the most common file export formats in Windows: MS Excel and MS Word....
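
    The classic technique such tutorials use is to render the bound grid to HTML and stream it to the browser with an Office content type. A hedged sketch for the Excel case, as a code-behind handler (the GridView ID and file name are illustrative):

        using System;
        using System.IO;
        using System.Web.UI;

        // Sends the contents of a bound GridView to the client as an .xls download.
        // GridViewResults is assumed to be a GridView already bound to your data.
        // Note: the page must override VerifyRenderingInServerForm(Control) for
        // RenderControl to succeed on a GridView.
        protected void ExportToExcel_Click(object sender, EventArgs e)
        {
            Response.Clear();
            Response.ContentType = "application/vnd.ms-excel";
            Response.AddHeader("content-disposition", "attachment; filename=export.xls");

            using (var sw = new StringWriter())
            using (var hw = new HtmlTextWriter(sw))
            {
                GridViewResults.RenderControl(hw); // render the grid's HTML table
                Response.Write(sw.ToString());     // Excel opens the HTML as a sheet
            }
            Response.End();
        }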

    Read the article

  • Oracle ENDECA Discovery 3.1 Partner Training 3-Day Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    To find out more about the ENDECA training, and to register for it, click here. June 24-26, 2014: Oracle Reading, UK – free to partners in EMEA. FREE of charge to OPN member partners, this Oracle Endeca Information Discovery (OEID) 3-day bootcamp is designed to give partners an understanding of OEID's features and how it complements the existing Oracle Business Intelligence suite. This workshop will provide hands-on experience with Oracle Endeca Information Discovery. Topics covered will include Data Exploration with Endeca Information Discovery, Data Ingest, Project Lifecycle, Building an Endeca Server data model and advanced modeling techniques, and Working with Studio. You will also learn about working with ETL components for content acquisition and other aspects of the project such as security. After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Endeca Information Discovery solution. If you are a Big Data Analytics Architect or Developer, BI or Data Warehouse Architect, Developer or Consultant, you don't want to miss this 3-day workshop. Click here to Register.

    Read the article

  • Want to build a replica of chartgame.com

    - by raj
    I want to develop a trading simulator based on technical analysis. My ideal application would be exactly chartgame.com. Currently chartgame.com doesn't have historical data for stocks beyond the year 2008, and I would like to have data until 2012, with the capability to extend beyond that if needed. What are the fundamentals of building an application like chartgame.com? If anyone here is willing to help, I can arrange for the finances. Let me know.

    Read the article

  • CommonFilter and CommonData solutions on CodePlex updated

    - by TATWORTH
    The CommonFilter and CommonData solutions on CodePlex have been updated post VS2010 SP1. The respective URLs are: http://commondata.codeplex.com/releases/view/62502 http://commonfilter.codeplex.com/releases/view/62499 CommonFilter is a cut-down version of CommonData containing just the filter functions. CommonData contains a vast number of useful functions for building ASP.NET web sites, including: lightweight reporting to a custom event log, and filter functions for common types of data input.

    Read the article

  • User input and automated input separation

    - by tpaksu
    I have a MySQL db and an automation script which modifies the data inside it once a day. These columns may also have been changed manually by a user. What is the best approach to make the system update only the automated data, and not the manually edited values? Yes, flagging the cell which was manually edited is one way to do it, but I want to know if there's another way to accomplish this. Just curious.
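
    A minimal sketch of the flag-column approach mentioned above, assuming the MySQL Connector/NET provider (table and column names are hypothetical):

        using MySql.Data.MySqlClient;

        // Updates only rows the automation still owns; anything a user has
        // touched (manually_edited = 1) is left alone.
        public static void ApplyAutomatedUpdate(string connectionString,
                                                int id, string newValue)
        {
            const string sql =
                "UPDATE widgets SET value = @value " +
                "WHERE id = @id AND manually_edited = 0";

            using (var conn = new MySqlConnection(connectionString))
            using (var cmd = new MySqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@value", newValue);
                cmd.Parameters.AddWithValue("@id", id);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }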

    Read the article

  • Working With Outlook from Access

    Last month we discussed how to get data from Microsoft Access into Outlook objects, such as in the creation of new appointments, tasks, contacts and emails. This month we perform the reverse operation: get data out of Outlook into Microsoft Access.

    Read the article

  • Free web "caching" services for a web service

    - by Jason Banico
    I have a web service on Google App Engine whose data is updated on a daily basis. To minimize bandwidth utilization from mobile clients connecting to it, I'd like to have an intermediary site that the clients get their data from, minimizing hits to my service to once or twice a day. Is there such a service I can use? I'd like to explore this "pull" option first, before considering "push" options such as publishing to a blog site or a free website host that doesn't have bandwidth caps.

    Read the article

  • Creating a Reporting Services Histogram Chart for Statistical Distribution Analysis

    Typically, transactional data is quite detailed, and analyzing an entire dataset on a graph is not feasible. Generally such data is analyzed using some form of aggregation or frequency distribution. One of the specialized charts generally used in Reporting Services for statistical distribution is the histogram chart. In this tip we look at how histogram charts can be used for statistical distribution analysis and how to create and configure this type of chart in SSRS.

    Read the article

  • Storing a looong lookup table

    - by inquisitive
    Background: The product I am working on has a very long lookup table. The table contains static data and cannot be auto-generated. There are about 500 rows and 10 columns; the columns hold mostly integers and strings. To complicate matters, there are actually two such tables: every row in table-1 maps to zero or more rows in table-2. We use an SQLite database with two tables. The product installer places the SQLite file in the installation directory. The application is written in .NET and we use ADO to load the data once on startup. Now the lookup table grows: in each monthly release we add about 10 new entries and adjust existing entries; every release we fine-tune existing entries.

    The problem: A team of (10) developers works on the lookup table. Code goes into SVN, but the little devil, the SQLite file, does not. This prevents multiple developers from working on it. We do take regular backups of the file, but proper versioning is not possible; we never know who made the breaking change. Worse, we don't know if there was any change at all, and diff'ing databases is tedious, if not impossible. The tables are expected to grow quite large in the years to come, and we will need developers to work on them in parallel. The data is business critical, and we need to be able to audit changes made to it.

    Question: What would be a solution to the problems outlined above? One idea was to transform the whole thing to XML and treat it like just another source file; that way SVN can do the versioning and we can work in parallel. But the data shows relational behavior: with XML we lose the unique and foreign-key constraints, and we can't query it with SQL-like ease. Any help here will be appreciated.
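
    One way to get SVN-friendly versioning without giving up SQLite at runtime is to keep a canonical text dump of each table alongside the database and commit the dump. A sketch assuming the System.Data.SQLite provider (the table name and tab-separated layout are illustrative; this is a developer-side tool, not production code):

        using System.Data.SQLite;
        using System.IO;

        // Dumps a table to a deterministic, line-per-row text file that SVN can
        // diff and blame, so "who changed what" becomes visible.
        public static void DumpTable(string dbPath, string tableName, string outPath)
        {
            using (var conn = new SQLiteConnection("Data Source=" + dbPath))
            using (var writer = new StreamWriter(outPath))
            {
                conn.Open();
                // ORDER BY keeps the dump stable, so diffs show real changes only.
                using (var cmd = new SQLiteCommand(
                    "SELECT * FROM " + tableName + " ORDER BY rowid", conn))
                using (SQLiteDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var fields = new string[reader.FieldCount];
                        for (int i = 0; i < reader.FieldCount; i++)
                        {
                            fields[i] = reader.GetValue(i).ToString();
                        }
                        writer.WriteLine(string.Join("\t", fields));
                    }
                }
            }
        }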

    Read the article

  • Finding privacy sensitive keywords

    - by user69914
    I have a list of about 80,000 unique words or short phrases. These words and short phrases are associated with other data. I'm trying to create a blacklist so that I won't use any of the data associated with privacy-sensitive words or short phrases. Example sensitive words or short phrases might be those associated with sexual or illicit activity. I know that privacy and sensitivity are in the eye of the beholder, but I'm looking for any established list or solution of this nature.
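
    Once a blacklist exists, the filtering step itself is straightforward. A minimal sketch (the file names and matching rule are illustrative; here a phrase is dropped if any single word in it appears in the blacklist):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class BlacklistFilter
        {
            static void Main()
            {
                // One blacklisted term per line, compared case-insensitively.
                var blacklist = new HashSet<string>(
                    File.ReadLines("blacklist.txt"),
                    StringComparer.OrdinalIgnoreCase);

                // Keep only phrases in which no individual word is blacklisted.
                var clean = File.ReadLines("phrases.txt")
                    .Where(phrase => !phrase.Split(' ')
                        .Any(word => blacklist.Contains(word)))
                    .ToList();

                File.WriteAllLines("phrases.clean.txt", clean);
            }
        }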

    Read the article

  • Free Webinar: Monitoring your business, not just your servers – Getting the most out of SQL Monitor

    Wednesday July 25 2012, 6:00pm BST: Learn how you can use SQL Monitor to gather information and alert on extra performance data for your servers and applications, making this tool vital for keeping an eye on your business. In this free webinar David Bick, Product Manager at Red Gate, will give you an overview of SQL Monitor, including the new custom metric functionality in v3.

    Repeatable deployment without fear of data loss: use your version control system with the SSMS plug-in SQL Source Control and SQL Compare for accurate deployments without the worry. Find out more.

    Read the article

  • SSIS Virtual Class

    - by ejohnson2010
    I recorded a virtual SSIS class with the good folks over at SSWUG, and the first airing of the class will be May 15th. This is 100% online, so you can do it on your own time and from anywhere. The class will run monthly and I will be available for questions throughout. You get the following 12 sessions on SSIS, each about an hour: Session 1: The SSIS Basics; Session 2: Control Flow Basics; Session 3: Data Flow - Sources and Destinations; Session 4: Data Flow - Transformations; Session 5: Advanced Transformations...(read more)

    Read the article
