Search Results

Search found 91480 results on 3660 pages for 'large data in sharepoint list'.


  • Oracle as a Data Source

    This article takes a quick look at Oracle Database's materialized views and extends the concept to the case where Oracle is the data source for another relational database management system.
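
    As a reminder of the feature the article builds on, here is a minimal sketch of an Oracle materialized view; the SALES table and its columns are hypothetical, and the view is refreshed on demand rather than on commit to keep the example self-contained.

        -- Hypothetical source table: SALES(product_id, amount)
        CREATE MATERIALIZED VIEW sales_by_product
          BUILD IMMEDIATE
          REFRESH COMPLETE ON DEMAND
        AS
          SELECT product_id, SUM(amount) AS total_amount
          FROM   sales
          GROUP  BY product_id;

        -- Refresh later, e.g. from a scheduled job:
        BEGIN
          DBMS_MVIEW.REFRESH('SALES_BY_PRODUCT');
        END;
        /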

    Read the article

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I’ve mentioned before that I’m not a fan of the word “Cloud”. It’s too marketing-oriented, gimmicky and non-specific. A better definition (in many cases) is “Distributed Computing”. That means that some or all of the computing functions are handled somewhere other than under your specific control.

    But there is a current use of the word “Cloud” that does not necessarily mean the computing is done somewhere else. In fact, it’s a vector of Cloud Computing that is better termed “Utility Computing”. This has to do with the provisioning of a computing resource: the setup, configuration, management, balancing and so on that is needed so that a user – who might actually be a developer – can do some computing work. To that person, the resource is just “there” and works like they expect, like the phone system or any other utility.

    The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It’s got a cool new trendy term – “Private Cloud” – but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a “Cloud Provider”.

    A good example here is your e-mail system. Your users – pretty much your whole company – just log into e-mail and expect it to work. To them, you are the “Cloud” provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider. Another example is a database server. In this case, the “end user” is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time. The more this is automated, the more you’re acting like a Cloud Provider.

    Lots of companies help you do this in your own data centers, from VMware to IBM and many others. Microsoft's offering here is based around System Center – they have a “cloud in a box” provisioning system that’s actually pretty slick.

    The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways – and we're happy to share how we do it. It’s not magic, and the algorithms for balancing (like the one we started with, called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases, such as top-secret or private systems, you probably should. But there are times when you should evaluate using Azure or other vendors, or even multiple vendors to spread your risk. All of this should be based on client need, not on what you already know how to do.

    So congrats on your new role as a “Cloud Provider”. If you have an e-mail system or a database platform, you can put that right on your resume.

    Read the article

  • BigQuery - UK dev community, JSON, nested/repeated, improved data loading - Live from London

    Join Michael Manoochehri and Ryan Boyd live from London to discuss Strata London and best practices for using BigQuery. They'll also host open office hours. Please add your questions to Google Moderator on developers.google.com. From: GoogleDevelopers.

    Read the article

  • Turning Supply Data Into Savings- (Almost) Everything You Need to Know in 12 Minutes

    - by David Hope-Ross
    Strategic sourcing and supplier management analytics are easy. The hard part is getting reliable data that provide an accurate record of suppliers, spend, invoices, expenses, and so on. In this new AppsCast, e-Three’s Amy Joshi provides an extraordinarily cogent explanation of key challenges, technologies, and tactics for improving spend visibility. Take twelve minutes to listen and learn. The techniques that Amy outlines can add millions to your organization’s bottom line.

    Read the article

  • Using NBuilder to mock up a data driven UI - Part 1

    In this article we will take a look at a fairly new open source project called NBuilder (http://www.nbuilder.org and http://code.google.com/p/nbuilder/) and how it can be used to provide us with fake data out of the gate. NBuilder allows you to quickly stand up generated objects based on standard .NET types in an easy, fluent manner. And that is just the start!
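
    To make the fluent style concrete, here is a minimal sketch using the static Builder<T> entry point that NBuilder documented at the time; the Product class and its properties are hypothetical.

        using System;
        using FizzWare.NBuilder;

        public class Product
        {
            public int Id { get; set; }
            public string Title { get; set; }
            public decimal Price { get; set; }
        }

        public static class NBuilderDemo
        {
            public static void Main()
            {
                // Ten Product objects with sequentially generated property values,
                // overriding Title for every item in the list.
                var products = Builder<Product>.CreateListOfSize(10)
                    .All()
                        .With(p => p.Title = "Sample product")
                    .Build();

                Console.WriteLine(products.Count);
            }
        }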

    Read the article

  • Displaying performance data per engine subsystem

    - by liortal
    Our game (Android based) traces how long it takes to do the world logic updates and how long it takes to render a frame to the device screen. These traces are collected every frame and displayed at a constant interval (currently every second). I've seen games where on-screen data for various engine subsystems is displayed along with the time each consumes, either as text or as horizontal colored bars. I am wondering how to implement such a feature.
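
    One hedged sketch of such an overlay in Android Java, assuming the game already draws into a per-frame Canvas; all class and field names here are hypothetical. Timings are accumulated every frame and flushed to displayed averages once per second, then drawn as one bar per subsystem.

        import android.graphics.Canvas;
        import android.graphics.Color;
        import android.graphics.Paint;

        public class ProfilerOverlay {
            private final Paint barPaint = new Paint();
            private final Paint textPaint = new Paint();
            private long logicNanos, renderNanos;   // accumulated since the last flush
            private int frames;
            private float logicMs, renderMs;        // averages currently shown on screen
            private long lastFlush = System.nanoTime();

            // Called once per frame with the measured subsystem times.
            public void record(long logicTimeNanos, long renderTimeNanos) {
                logicNanos += logicTimeNanos;
                renderNanos += renderTimeNanos;
                frames++;
                long now = System.nanoTime();
                if (now - lastFlush >= 1000000000L) {   // flush once per second
                    logicMs = (logicNanos / (float) frames) / 1000000f;
                    renderMs = (renderNanos / (float) frames) / 1000000f;
                    logicNanos = 0;
                    renderNanos = 0;
                    frames = 0;
                    lastFlush = now;
                }
            }

            // Called at the end of the render pass, after the scene is drawn.
            public void draw(Canvas canvas) {
                textPaint.setColor(Color.WHITE);
                // One horizontal bar per subsystem: 10 pixels per millisecond.
                barPaint.setColor(Color.GREEN);
                canvas.drawRect(10, 10, 10 + logicMs * 10, 25, barPaint);
                canvas.drawText(String.format("logic %.2f ms", logicMs), 15 + logicMs * 10, 22, textPaint);
                barPaint.setColor(Color.CYAN);
                canvas.drawRect(10, 30, 10 + renderMs * 10, 45, barPaint);
                canvas.drawText(String.format("render %.2f ms", renderMs), 15 + renderMs * 10, 42, textPaint);
            }
        }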

    Read the article

  • Object detection in bitmap JavaScript canvas

    - by fallenAngel
    I want to detect clicks on canvas elements which are drawn using paths. So far I have stored element paths in a JavaScript data structure and then checked the coordinates of hits against each element's coordinates. Rendering each element path and checking for hits would be inefficient when there are a lot of elements. I believe there must be an algorithm for this kind of coordinate search; can anyone help me with this?
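
    One hedged approach, assuming each element's outline can be kept as a Path2D object in browsers that support it: the 2D context can test a point against a stored path directly, so nothing has to be re-rendered on click. For very large element counts, a coarse spatial index (a grid or quadtree of bounding boxes) can prune the candidates before the exact path test.

        // Sketch: hit-testing stored Path2D objects on click (element data is hypothetical).
        const canvas = document.getElementById("scene") as HTMLCanvasElement;
        const ctx = canvas.getContext("2d")!;

        interface Shape { id: string; path: Path2D; }
        const shapes: Shape[] = [];   // populated as the elements are drawn

        canvas.addEventListener("click", (e) => {
          const rect = canvas.getBoundingClientRect();
          const x = e.clientX - rect.left;
          const y = e.clientY - rect.top;
          const hit = shapes.find(s => ctx.isPointInPath(s.path, x, y));
          if (hit) {
            console.log("clicked element", hit.id);
          }
        });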

    Read the article

  • GDD-BR 2010 [0H] OpenID-based single sign-on and OAuth data access

    Speaker: Ryan Boyd. Track: Chrome and HTML5. Time slot: H [17:20 - 18:05]. Room: 0. A discussion of all the auth tangles you've encountered so far -- OpenID, SSO, 2-Legged OAuth, 3-Legged OAuth, and Hybrid OAuth. We'll show you when and where to use them, and explain how they all integrate with Google APIs and other developer products. From: GoogleDevelopers.

    Read the article

  • WebCenter Customer Spotlight: Hitachi Data Systems

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Watch this webcast to see a live demo of how HDS creates multilingual content for their 35+ regional websites.

    Solution Summary
    Hitachi Data Systems (HDS) provides mid-range and high-end storage systems, software and services. It is a wholly owned subsidiary of Hitachi Ltd. HDS is based in Santa Clara, California, and has over 5,300 employees in more than 100 countries and regions. HDS's main objectives were to provide a consistent message across all their sites, to maintain a tight governance structure across their messages and related content, to expand the use of the existing content management systems and to implement a centralized translation management system. HDS implemented a global web content management system based on Oracle WebCenter Content and integrated the Lingotek translation management system to manage their multilingual content. The implemented solution gives each Geo the ability to expand its web offering to meet local market needs while staying aligned with the Corporate Web Guidelines.

    Company Overview
    Hitachi Data Systems (HDS) provides mid-range and high-end storage systems, software and services. It is a wholly owned subsidiary of Hitachi Ltd. and part of the Hitachi Information Systems & Telecommunications Division. The company sells through direct and indirect channels in more than 170 countries and regions, and its customers include 50 percent of the Fortune 100 companies. HDS is based in Santa Clara, California, and has over 5,300 employees in more than 100 countries and regions.

    Business Challenges
    HDS has over 35 global websites, and the lack of global web capabilities led to inconsistent messaging, slower time to market and a failure to address local language needs. There was extensive operational overhead due to manual and redundant processes. Translation efforts were superficial, inconsistent and wasteful, and the lack of translation automation tools discouraged localization. HDS's main objectives were to provide a consistent message across all their sites, to maintain a tight governance structure across their messages and related content, to expand the use of the existing content management systems and to implement a centralized translation management system.

    Solution Deployed
    HDS implemented a global web content management system based on Oracle WebCenter Content. The solution supports decentralized publishing for their 35+ global sites to address local market needs while ensuring editorial and brand review through embedded review processes. They integrated the Lingotek translation management system into Oracle WebCenter Content to manage their multilingual content.

    Business Results
    - Provides each Geo with the ability to expand its web offering to meet local market needs, while staying aligned with the Corporate Web Guidelines
    - Enables end-to-end content lifecycle management across multiple languages
    - Leverages translation memory for reuse and consistency
    - Reduces time to market with a central repository of translated content

    Additional Information
    - HDS Webcast
    - Oracle WebCenter Content
    - Lingotek website

    Read the article

  • Data searches that leverage existing indexes

    Recent installments of our SQL Server 2005 Express Edition series have been discussing its implementation of Full-Text Indexing. This article focuses on data searches that leverage existing indexes, taking into account such features as noise words and thesaurus files.
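
    As a flavor of the kind of query the article covers, here is a minimal T-SQL sketch; it assumes a full-text index already exists on a hypothetical Products.Description column and that the language's thesaurus file has been configured.

        -- Thesaurus expansion: matches rows whose Description contains "car"
        -- or any synonym defined for it in the language's thesaurus file.
        SELECT ProductID, Name
        FROM   Products
        WHERE  CONTAINS(Description, 'FORMSOF(THESAURUS, car)');

        -- Noise words (stopwords) such as "the" or "and" are ignored by the
        -- full-text engine, so on their own they contribute nothing to a search.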

    Read the article

  • Business layer access to the data layer

    - by rerun
    I have a business layer (BL) and a data layer (DL). I have an object o with a child object collection of type C, and I would like to provide semantics like the following: o.Children.Add("info"). In the BL I would like to have a static class that all of the business layer classes use to get a reference to the current data layer instance. Is there any issue with that, or must I use the factory pattern to limit creation to a class in the BL that knows the DL instance?
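
    A minimal sketch of the static-holder idea described above, in C# with entirely hypothetical type names; whether this or a factory is preferable mostly comes down to testability and to who controls the data layer's lifetime.

        using System;

        // Hypothetical data layer contract.
        public interface IDataLayer
        {
            void AddChild(string parentId, string info);
        }

        // Static holder that all business layer classes share.
        public static class DataLayerContext
        {
            private static IDataLayer current;

            // Set once at application start-up (or per request / per test).
            public static void SetCurrent(IDataLayer dataLayer)
            {
                current = dataLayer;
            }

            public static IDataLayer Current
            {
                get
                {
                    if (current == null)
                        throw new InvalidOperationException("Data layer not initialized.");
                    return current;
                }
            }
        }

        // A business layer collection that supports o.Children.Add("info").
        public class ChildCollection
        {
            private readonly string parentId;

            public ChildCollection(string parentId)
            {
                this.parentId = parentId;
            }

            public void Add(string info)
            {
                DataLayerContext.Current.AddChild(parentId, info);
            }
        }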

    Read the article

  • Data Mining Introduction

    Many people who have worked with SQL Server for years have never used Data Mining. The objective of this article is to introduce them to this magical and exciting new world.

    Read the article

  • Why do so few large websites run a Microsoft stack?

    - by realworldcoder
    Off the top of my head, I can think of a handful of large sites that utilize the Microsoft stack: Microsoft.com, Dell, MySpace, PlentyOfFish, StackOverflow, Hotmail, Bing and Windows Live. However, based on observation, nearly all of the top 500 sites seem to be running other platforms. What are the main reasons for so little market penetration? Cost? Technology limitations? Does Microsoft cater to corporate/intranet environments more than to public websites? I'm not looking for market share, but rather large-scale adoption of the MS stack.

    Read the article

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture – some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections. The first was that all communications to SQL Azure need to be encrypted. It’s a simple addition to the connection string, depending on the library you use. Which brought up another interesting point.

    They had been using something that looked like this, using the .NET provider:

        Server=tcp:[serverName].database.windows.net;Database=myDataBase;
        User ID=LoginName;Password=myPassword;
        Trusted_Connection=False;Encrypt=True;

    This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change:

        Server=tcp:[serverName].database.windows.net;Database=myDataBase;
        User ID=[LoginName]@[serverName];Password=myPassword;
        Trusted_Connection=False;Encrypt=True;

    Notice the difference? It’s the User ID parameter. It includes the @ symbol and the name of the server – not the whole DNS name, just the server name itself.

    The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other? It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change, so I don’t list them here) the server name parameter isn’t sent in a way the load balancer understands, so you need to include the server name right in the login so the system can parse it correctly. Keep in mind, the string limit for that is 128 characters – so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx

    Upshot? Include the @servername in your connection string just to be safe. And plan for that extra space…
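
    A minimal sketch of using the recommended format from .NET code; the server, database, login and password values below are placeholders, not the client's actual settings.

        using System;
        using System.Data.SqlClient;

        class SqlAzureConnectionDemo
        {
            static void Main()
            {
                // Note the user@server form in User ID, and Encrypt=True.
                var connectionString =
                    "Server=tcp:myserver.database.windows.net;Database=myDataBase;" +
                    "User ID=myLogin@myserver;Password=myPassword;" +
                    "Trusted_Connection=False;Encrypt=True;";

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    Console.WriteLine("Connected: " + connection.State);
                }
            }
        }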

    Read the article

  • Object detection in bitmap JavaScript canvas

    - by user1538127
    I want to detect clicks on canvas elements which are drawn using paths. So far I have thought of storing the element paths in a JavaScript data structure and then checking the coordinates of hits against the elements' coordinates. I believe there is already an algorithm for this kind of coordinate search. Rendering each element's path and checking for hits would be inefficient when the number of elements is large. Can anyone point me to it?

    Read the article

  • Resolving data redundancy up front

    - by okeofs
    Introduction

    As all of us do when confronted with a problem, the resource of choice is to ‘Google it’. This is where the plot thickens. Recently I was asked to stage data from numerous databases which were to be loaded into a data warehouse. To make a long story short, I was looking for a way to obtain the table names from each database, to ascertain potential overlap. As the source data comes from SQL databases created from dumps of a third-party product, there were roughly 95 tables per database.

    Yes, I know the first instinct is to use the system stored procedure exec sp_msforeachdb 'select "?" AS db, * from [?].sys.tables'. However, if one stops to think about this, it would be nice to have all the results in a temporary or disc-based table, which in itself implies additional labour. This said, I decided to ‘re-invent’ the wheel. The full code sample may be found at the bottom of this article.

    Define a few temporary tables and variables

        declare @sql varchar(max);
        declare @databasename varchar(75)

        /*
        drop table ##rawdata3
        drop table #rawdata1
        drop table #rawdata11
        */

        -- A temp table to hold the names of my databases
        CREATE TABLE #rawdata1
        (
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )

    A second temp table holds the same database names, but adds an identity column (recNo) to use as a loop variable. You will note below that I loop through until I reach 25, because at that point the system databases, the reporting server database and so on begin; records 1 to 24 are the user databases, which are what I was really looking for. While not the best solution, it works, and the code was meant as a quick and dirty exercise.

        CREATE TABLE #rawdata11
        (
           recNo int identity(1,1),
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )

        -- My output table showing the database name and table name
        CREATE TABLE ##rawdata3
        (
           database_name varchar(75),
           table_name varchar(75)
        )

    Insert the database names into a temporary table

    I pull the database names using the system stored procedure sp_databases:

        INSERT INTO #rawdata1
        EXEC sp_databases
        GO

    I then insert the results from #rawdata1 into #rawdata11, the table carrying a record number, so that I can loop through the extract:

        INSERT INTO #rawdata11
        SELECT * FROM #rawdata1

    We now declare three more variables. @kounter keeps track of our position within the loop. @databasename holds the ‘current’ database name for the current pass of the loop; in order to obtain the tables for that database we need to issue a USE statement, an insert command and other related code. This is the challenging part. @sql is a varchar(max) variable that holds the USE statement plus the insert statements. We initialize @kounter to 1.

        declare @kounter int;
        declare @databasename varchar(75);
        declare @sql varchar(max);
        set @kounter = 1

    The Loop

    The astute reader will remember that the temporary table #rawdata11 contains our database names, and that each ‘database row’ has a record number (recNo). I am only interested in record numbers under 25. I now set the value of the variable @databasename, using the row number as part of the predicate. Then, knowing the database name, I can create dynamic T-SQL to be executed using the sp_sqlexec stored procedure (see the code below).
    Finally, after all the tables for the given database have been placed in the temporary table ##rawdata3, I increment the counter and continue. Note that I used a global temporary table to ensure that the result set persists after the run terminates. At some stage I plan to redo this part of the code, as global temporary tables are not really an ideal solution.

        WHILE (@kounter < 25)
        BEGIN
           select @databasename = database_name from #rawdata11 where recNo = @kounter

           set @sql = 'Use ' + @databasename + ' Insert into ##rawdata3 '
                    + ' SELECT table_catalog, Table_name FROM information_schema.tables'

           exec sp_sqlexec @sql

           SET @kounter = @kounter + 1
        END

    The full code extract

    Here is the full code sample.

        declare @sql varchar(max);
        declare @databasename varchar(75)

        /*
        drop table ##rawdata3
        drop table #rawdata1
        drop table #rawdata11
        */

        CREATE TABLE #rawdata1
        (
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )

        CREATE TABLE #rawdata11
        (
           recNo int identity(1,1),
           database_name varchar(50),
           database_size varchar(50),
           remarks varchar(50)
        )

        CREATE TABLE ##rawdata3
        (
           database_name varchar(75),
           table_name varchar(75)
        )

        INSERT INTO #rawdata1
        EXEC sp_databases
        GO

        INSERT INTO #rawdata11
        SELECT * FROM #rawdata1

        declare @kounter int;
        declare @databasename varchar(75);
        declare @sql varchar(max);
        set @kounter = 1

        WHILE (@kounter < 25)
        BEGIN
           select @databasename = database_name from #rawdata11 where recNo = @kounter

           set @sql = 'Use ' + @databasename + ' Insert into ##rawdata3 '
                    + ' SELECT table_catalog, Table_name FROM information_schema.tables'

           exec sp_sqlexec @sql

           SET @kounter = @kounter + 1
        END

        select * from ##rawdata3 where table_name like '%SalesOrderHeader%'
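
    For comparison, here is a minimal sketch of the sp_msforeachdb route the author mentions and sets aside, capturing its output into a similar global temporary table; note that it visits every database, including the system databases.

        CREATE TABLE ##alltables
        (
           database_name varchar(75),
           table_name varchar(75)
        )

        -- sp_msforeachdb replaces the "?" placeholder with each database name in turn.
        EXEC sp_msforeachdb
           'INSERT INTO ##alltables (database_name, table_name)
            SELECT ''?'', name FROM [?].sys.tables'

        select * from ##alltables where table_name like '%SalesOrderHeader%'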

    Read the article
