Search Results

Search found 5287 results on 212 pages for 'physical computing'.


  • Book Review - Programming Windows Azure by Sriram Krishnan

    - by BuckWoody
    As part of my professional development, I’ve created a list of books to read throughout the year, starting in June of 2011. This is a review of the first one, called Programming Windows Azure by Sriram Krishnan. You can find my entire list of books I’m reading for my career here: http://blogs.msdn.com/b/buckwoody/archive/2011/06/07/head-in-the-clouds-eyes-on-the-books.aspx
    Why I Chose This Book: As part of my learning style, I try to read multiple books about a single subject. I’ve found that at least 3 books are necessary to get the right amount of information to me. This is a “technical” work, meaning that it deals with technology and not business, writing or other facets of my career. I’ll have a mix of all of those as I read along. I chose this work in addition to others I’ve read since it covers everything from an introduction to more advanced topics in a single book. It also has some practical examples of actually working with the product, particularly on storage. Although it’s dated, many examples still translate. I also saw that it had pretty good reviews.
    What I learned: I learned a great deal about storage, and many useful code snippets. I do think that there could have been more of a focus on the Application Fabric - but of course that wasn’t as mature a feature when this book was written. I learned some great architecture examples, and in one section I learned more about encryption. In that example, however, I would rather have seen the examples go the other way - the book focused on moving data from on-premise to Azure storage in an encrypted fashion. Using the Application Fabric I would rather see sensitive data left in a hybrid fashion on premise, and connected to from the Azure application. Even so, the examples were very useful.
    If you’re looking for a good “starter” Azure book, this is a good choice. I also recommend the last chapter as a quick read for a DBA, or Database Administrator. It’s not very long, but useful. Note that the limits described are incorrect - which is one of the dangers of reading a book about any cloud offering. The services offered are updated so quickly that the information is in constant danger of being “stale”. Even so, I found this a useful book, which I believe will help me work with Azure better.
    Raw Notes: I take notes as I read, calling that process “reading with a pencil”. I find that when I do that I pay attention better, and record some things that I need to know later. I’ll take these notes, categorize them into a OneNote notebook that I synchronize in my Live.com account, and that way I can search them from anywhere. I can even read them on the web, since Live.com has a OneNote program built in. Note that these are the raw notes, so they might not make a lot of sense out of context - I include them here so you can watch my thought process.
    Programming Windows Azure by Sriram Krishnan: Learning about how to select applications suitable for Distributed Technology. Application Fabric gets the least attention; probably because it was newer at the time. Very clear (Chapter One) Good foundation. Background and history, but not too much. I normally arrange my descriptions differently, starting with the use-cases and moving to physicality, but this difference helps me. Interesting that I am reading this using Safari Books Online, which uses many of these concepts.
    Taught me some new aspects of a Hypervisor – very low-level information about the Azure Fabric (not to be confused with the Application Fabric feature) (Chapter Two) Good detail of what is included in the SDK. Even more is available now. CS = Cloud Service (Chapter 3) Place Storage info in the configuration file, since it can be streamed in-line with a running app. Ditto for logging, and keep separate configs for staging and testing. Easy to switch in and out. (Chapter 4) There are two Runtime APIs, one external and one internal. Realizing how powerful this paradigm really is. Some places seem light and drop off a bit, but perhaps that’s best. The Management API is not charged, which is nice. I don’t often think about the price, until it comes to an actual deployment (Chapter 5) Csmanage is something I want to dig into deeper. The API requires package moves to Blob storage first, so it needs a URL. A Csmanage equivalent can be written in Unix scripting using openssl. Upgrades are possible, and you use the upgradeDomainCount attribute in the ServiceDefinition.csdef file. Always use a low-privileged account to test on the dev fabric, since Windows Azure runs in partial trust. Full trust is available, but can be dangerous and must be well thought out. (Chapter 6) Learned how to run full CMD commands in a web window – not that you would ever do that, but it was an interesting view into those links. This leads to a discussion on hosting other runtimes (such as Java or PHP) in Windows Azure. I got an expanded view on this process, although this is where the book shows its age a little. Books can be a problem for Cloud Computing for this reason – things just change too quickly. Windows Azure storage is not eventually consistent – it is instantly consistent with multi-phase commit. Plumbing for this is internal; you are not required to code it. (Chapter 7) The REST API makes the service interoperable, hybrid, and consistent across code architectures. Nicely done. Use affinity groups to keep data and code together. Side note: e-book readers need a common “notes” feature. There’s a decent quick description of REST in this chapter. Learned about the CloudDrive code – a PowerShell sample that mounts Blob storage as a local provider. Works against the Dev fabric by default, can be switched to Account. Good treatment in the storage chapters on the differences between using Dev storage and Azure storage. These can be mitigated. No, blobs are not of any size or number. Not a good statement. (Chapter 8) Blob storage is probably Azure’s closest play to Infrastructure as a Service (IaaS). Blob change operations must be authenticated, even when public. Chapters on storage are pretty in-depth. Queue messages are base-64 encoded (Chapter 9) The visibility timeout ensures processing of a message in a disconnected system. Order is not guaranteed for a message, so if you need that, set an increasing number in the queue mechanism. While Queues are accessible via REST, they are not public and are secured by default. Interesting – the header for a queue request includes an estimated count. This can be useful to create more worker roles in a dynamic system. Each Entity (row) in the Azure Table service is atomic – all or nothing. (Chapter 10) An entity can have up to 255 properties. Use “ID” for the class to indicate the key value, or use the [DataServiceKey] attribute. LINQ makes working with the Azure Table Service much easier, although interop is certainly possible. Good description on the process of selecting the Partition and Row Key.
    When checking for continuation tokens for pagination, include logic that falls out of the check in case you are at the last page. On deleting a storage object, it is instantly unavailable; however, a background process is dispatched to perform the physical deletion. So if you want to re-create a storage object with the same name, add retry logic into the code. Interesting approach to deleting an index entity without having to read it first – create a local entity with the same keys and apply it to the Azure system regardless of change-state. Although the “Indexes” description is a little vague, it’s interesting to see a Folding and Stemming discussion à la the Porter Stemming Algorithm. (Chapter 11) Presents a better discussion of indexes (at least inverted indexes) later in the chapter. Great treatment for DBAs in Chapter 11. We need to work on getting secondary indexes in Table storage. There is a limited form of transactions called “Entity Group Transactions” that, although they have conditions, make a transactional system more possible. Concurrency also becomes an issue, but is handled well if you’re using Data Services in .NET. It watches the ETag and allows you to take action appropriately. I do not recommend using Azure as a location for secure backups. In fact, I would rather have seen the examples in (Chapter 12) go the other way, showing how data could be brought back to a local store as a DR or HA strategy. Even so, there is good information on cryptography and so on. The chapter seems out of place, and should be combined with the Blob chapter. (Chapter 13) on SQL Azure is dated, although the base concepts are OK. Nice example of simple ADO.NET access to a SQL Azure (or any SQL Server, really) database.

    Read the article

  • Algorithm for computing the inverse of a polynomial

    - by Neville
    I'm looking for an algorithm (or code) to help me compute the inverse of a polynomial; I need it for implementing NTRUEncrypt. I prefer an algorithm that is easily understandable. There are pseudo-code listings for doing this, but they are confusing and difficult to implement, and furthermore I cannot really understand the procedure from pseudo-code alone. Are there any algorithms for computing the inverse of a polynomial with respect to a ring of truncated polynomials?
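    One common route (not necessarily what NTRUEncrypt implementations use in production) is the extended Euclidean algorithm over GF(p)[x] with the modulus x^N - 1. The C# sketch below assumes the coefficient modulus p is a small prime; NTRU's inverse modulo q = 2^k additionally requires lifting the mod-2 result (for example by Newton iteration), which is not shown. All type and method names here are illustrative, not taken from the question.

    using System;
    using System.Linq;

    static class TruncatedPolyInverse
    {
        // Polynomials are int[] coefficient arrays, lowest degree first, over GF(p).

        static int[] Trim(int[] a)
        {
            int d = a.Length;
            while (d > 0 && a[d - 1] == 0) d--;
            return a.Take(d).ToArray();
        }

        // Modular inverse of a nonzero element via Fermat's little theorem (p prime).
        static int Inv(int a, int p) => (int)System.Numerics.BigInteger.ModPow(a, p - 2, p);

        // Polynomial long division of a by b over GF(p): returns (quotient, remainder).
        static (int[] Q, int[] R) DivMod(int[] a, int[] b, int p)
        {
            int[] r = (int[])a.Clone();
            int[] q = new int[Math.Max(a.Length - b.Length + 1, 1)];
            int invLead = Inv(b[b.Length - 1], p);
            for (int d = a.Length - b.Length; d >= 0; d--)
            {
                int c = r[d + b.Length - 1] * invLead % p;
                q[d] = c;
                for (int i = 0; i < b.Length; i++)
                    r[d + i] = ((r[d + i] - c * b[i]) % p + p) % p;
            }
            return (Trim(q), Trim(r));
        }

        static int[] Mul(int[] a, int[] b, int p)
        {
            if (a.Length == 0 || b.Length == 0) return Array.Empty<int>();
            int[] c = new int[a.Length + b.Length - 1];
            for (int i = 0; i < a.Length; i++)
                for (int j = 0; j < b.Length; j++)
                    c[i + j] = (c[i + j] + a[i] * b[j]) % p;
            return Trim(c);
        }

        static int[] Sub(int[] a, int[] b, int p)
        {
            int[] c = new int[Math.Max(a.Length, b.Length)];
            for (int i = 0; i < c.Length; i++)
            {
                int ai = i < a.Length ? a[i] : 0, bi = i < b.Length ? b[i] : 0;
                c[i] = ((ai - bi) % p + p) % p;
            }
            return Trim(c);
        }

        // Inverse of f in GF(p)[x]/(x^N - 1), or null if f is not invertible there.
        // Loop invariant: s_i * f ≡ r_i (mod x^N - 1), so when the gcd r0 ends up
        // being a nonzero constant c, the inverse is s0 / c.
        public static int[] Invert(int[] f, int N, int p)
        {
            int[] m = new int[N + 1];            // the modulus x^N - 1
            m[0] = p - 1; m[N] = 1;
            int[] r0 = m, r1 = Trim(f.Select(x => ((x % p) + p) % p).ToArray());
            int[] s0 = Array.Empty<int>(), s1 = new[] { 1 };
            while (r1.Length > 0)
            {
                var (q, rem) = DivMod(r0, r1, p);
                (r0, r1) = (r1, rem);
                (s0, s1) = (s1, Sub(s0, Mul(q, s1, p), p));
            }
            if (r0.Length != 1) return null;     // gcd is not a unit: no inverse exists
            int c = Inv(r0[0], p);
            int[] result = new int[N];
            for (int i = 0; i < s0.Length; i++)  // reduce mod x^N - 1 (x^N ≡ 1)
                result[i % N] = (result[i % N] + s0[i] * c) % p;
            return result;
        }
    }

    Calling Invert(f, N, p) returns the N coefficients of the inverse of f in the ring, or null when f is not invertible.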

    Read the article

  • computing hash values, integral types versus struct/class

    - by aaa
    Hello, I would like to know if there is a difference in speed between computing the hash value (for example, for a std::map key) of a primitive integral type, such as int64_t, and of a POD type, for example struct { int16_t v[4]; }. I know this is going to be implementation specific, so my question ultimately pertains to the GNU standard library. Thanks

    Read the article

  • Delphi - Using DeviceIoControl passing IOCTL_DISK_GET_LENGTH_INFO to get flash media physical size (Not Partition)

    - by SuicideClutchX2
    Alright this is the result of a couple of other questions. It appears I was doing something wrong with the suggestions and at this point have come up with an error when using the suggested API to get the media size. Those new to my problem I am working at the physical disk level, not within the confines of a partition or file system. Here is the pastebin code for the main unit (Delphi 2009) - http://clutchx2.pastebin.com/iMnq8kSx Here is the application source and executable with a form built to output the status of whats going on - http://www.mediafire.com/?js8e6ci8zrjq0de Its probably easier to use the download, unless your just looking for problems within the code. I will also paste the code here. unit Main; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls; type TfrmMain = class(TForm) edtDrive: TEdit; lblDrive: TLabel; btnMethod1: TButton; btnMethod2: TButton; lblSpace: TLabel; edtSpace: TEdit; lblFail: TLabel; edtFail: TEdit; lblError: TLabel; edtError: TEdit; procedure btnMethod1Click(Sender: TObject); private { Private declarations } public { Public declarations } end; TDiskExtent = record DiskNumber: Cardinal; StartingOffset: Int64; ExtentLength: Int64; end; DISK_EXTENT = TDiskExtent; PDiskExtent = ^TDiskExtent; TVolumeDiskExtents = record NumberOfDiskExtents: Cardinal; Extents: array[0..0] of TDiskExtent; end; VOLUME_DISK_EXTENTS = TVolumeDiskExtents; PVolumeDiskExtents = ^TVolumeDiskExtents; var frmMain: TfrmMain; const FILE_DEVICE_DISK = $00000007; METHOD_BUFFERED = 0; FILE_ANY_ACCESS = 0; IOCTL_DISK_BASE = FILE_DEVICE_DISK; IOCTL_VOLUME_BASE = DWORD('V'); IOCTL_DISK_GET_LENGTH_INFO = $80070017; IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS = ((IOCTL_VOLUME_BASE shl 16) or (FILE_ANY_ACCESS shl 14) or (0 shl 2) or METHOD_BUFFERED); implementation {$R *.dfm} function GetLD(Drive: Char): Cardinal; var Buffer : String; begin Buffer := Format('\\.\%s:',[Drive]); Result := CreateFile(PChar(Buffer),GENERIC_READ Or GENERIC_WRITE,FILE_SHARE_READ,nil,OPEN_EXISTING,0,0); If Result = INVALID_HANDLE_VALUE Then begin Result := CreateFile(PChar(Buffer),GENERIC_READ,FILE_SHARE_READ,nil,OPEN_EXISTING,0,0); end; end; function GetPD(Drive: Byte): Cardinal; var Buffer : String; begin If Drive = 0 Then begin Result := INVALID_HANDLE_VALUE; Exit; end; Buffer := Format('\\.\PHYSICALDRIVE%d',[Drive]); Result := CreateFile(PChar(Buffer),GENERIC_READ Or GENERIC_WRITE,FILE_SHARE_READ,nil,OPEN_EXISTING,0,0); If Result = INVALID_HANDLE_VALUE Then begin Result := CreateFile(PChar(Buffer),GENERIC_READ,FILE_SHARE_READ,nil,OPEN_EXISTING,0,0); end; end; function GetPhysicalDiskNumber(Drive: Char): Byte; var LD : DWORD; DiskExtents : PVolumeDiskExtents; DiskExtent : TDiskExtent; BytesReturned : Cardinal; begin Result := 0; LD := GetLD(Drive); If LD = INVALID_HANDLE_VALUE Then Exit; Try DiskExtents := AllocMem(Max_Path); DeviceIOControl(LD,IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS,nil,0,DiskExtents,Max_Path,BytesReturned,nil); If DiskExtents^.NumberOfDiskExtents > 0 Then begin DiskExtent := DiskExtents^.Extents[0]; Result := DiskExtent.DiskNumber; end; Finally CloseHandle(LD); end; end; procedure TfrmMain.btnMethod1Click(Sender: TObject); var PD : DWORD; CardSize: Int64; BytesReturned: DWORD; CallSuccess: Boolean; begin PD := GetPD(GetPhysicalDiskNumber(edtDrive.Text[1])); If PD = INVALID_HANDLE_VALUE Then Begin ShowMessage('Invalid Physical Disk Handle'); Exit; End; CallSuccess := DeviceIoControl(PD, IOCTL_DISK_GET_LENGTH_INFO, nil, 0, @CardSize, SizeOf(CardSize), 
    BytesReturned, nil); if not CallSuccess then begin edtError.Text := IntToStr(GetLastError()); edtFail.Text := 'True'; end else edtFail.Text := 'False'; CloseHandle(PD); end; end. I placed a second method button on the form so I can write a different set of code into the app if I feel like it. There is only minimal error handling and safeguarding; there is nothing that wasn't necessary for debugging this via source. I tried this on a Sony Memory Stick using a PSP as the reader because I can't find the adapter for using a Duo in my machine. The target is an MS, and half of my users use a PSP for a reader, half don't. However, this should work fine on SD cards, and that is a secondary target for my work as well. I also tried this on a USB memory card reader and several SD cards. Now that I have fixed my attempt I get an error returned: 50 ERROR_NOT_SUPPORTED, "The request is not supported." I have found an application that uses this API as well as a lot of related functions for what I am trying to do. I am getting ready to look into it; the application is called DriveImage and its source is here - http://sourceforge.net/projects/diskimage/ The only thing I have really noticed from that application is their use of TFileStream and using that to get a handle on the physical disk.
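    Not a fix for the Delphi code above, but for reference, here is a minimal C# P/Invoke sketch of the same IOCTL_DISK_GET_LENGTH_INFO call. The control code 0x0007405C is the conventional CTL_CODE(FILE_DEVICE_DISK, 0x17, METHOD_BUFFERED, FILE_READ_ACCESS) encoding; the drive path and error handling are illustrative only, and opening a physical drive generally requires administrative rights.

    using System;
    using System.ComponentModel;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    static class DiskLength
    {
        const uint GENERIC_READ = 0x80000000;
        const uint FILE_SHARE_READ = 0x1, FILE_SHARE_WRITE = 0x2;
        const uint OPEN_EXISTING = 3;
        // CTL_CODE(FILE_DEVICE_DISK, 0x17, METHOD_BUFFERED, FILE_READ_ACCESS)
        const uint IOCTL_DISK_GET_LENGTH_INFO = 0x0007405C;

        [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        static extern SafeFileHandle CreateFile(string name, uint access, uint share,
            IntPtr security, uint disposition, uint flags, IntPtr template);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool DeviceIoControl(SafeFileHandle device, uint code,
            IntPtr inBuf, uint inSize, out long outBuf, uint outSize,
            out uint bytesReturned, IntPtr overlapped);

        // Returns the physical length in bytes of e.g. @"\\.\PhysicalDrive1".
        public static long GetLength(string devicePath)
        {
            using (SafeFileHandle h = CreateFile(devicePath, GENERIC_READ,
                FILE_SHARE_READ | FILE_SHARE_WRITE, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero))
            {
                if (h.IsInvalid) throw new Win32Exception();
                if (!DeviceIoControl(h, IOCTL_DISK_GET_LENGTH_INFO, IntPtr.Zero, 0,
                    out long length, (uint)sizeof(long), out uint _, IntPtr.Zero))
                    throw new Win32Exception();   // e.g. 50 = ERROR_NOT_SUPPORTED on some card readers
                return length;
            }
        }
    }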

    Read the article

  • What does "Computing additional info" mean?

    - by JesperE
    Eclipse Helios periodically starts running a job which displays "Computing additional info". During this time, Eclipse is very sluggish, bordering on unusable. What does this job do? Can I shut it off? I just hope that someone on the JDT team comes to their senses and gets rid of it, makes it go faster, or at the very least changes it to something meaningful.

    Read the article

  • R: optimal way of computing the "product" of two vectors

    - by Musa
    Hi, Let's assume that I have a vector r <- rnorm(4) and a matrix W of dimension 20000*200, for example: W <- matrix(rnorm(20000*200),20000,200). I want to compute a new matrix M of dimension 5000*200 such that m11 <- r%*%W[1:4,1], m21 <- r%*%W[5:8,1], m12 <- r%*%W[1:4,2], etc. (i.e. grouping rows 4-by-4 and computing the product). What's the optimal (speed, memory) way of doing this? Thanks in advance.

    Read the article

  • Cloud security and privacy

    - by Rakesh K
    Hi, I have a very basic question regarding cloud computing, which is catching on pretty fast these days. To my understanding, cloud computing is a paradigm in which companies put their data and applications on somebody else's machines, aka 'The Cloud'. I want to know just how secure it is to put my data on some third-party machines, especially if my data contains private details. In particular, how can an enterprise trust cloud computing service providers on this data privacy aspect? Thanks, rakesh.

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost performance of simple RSS-feed aggregator. I will use here only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects performance of feed aggregator. Feed aggregator Our feed aggregator works as follows: Load list of blogs Download RSS-feed Parse feed XML Add new posts to database Our feed aggregator is run by task scheduler after every 15 minutes by example. We will start our journey with serial implementation of feed aggregator. Second step is to use task parallelism and parallelize feeds downloading and parsing. And our last step is to use data parallelism to parallelize database operations. We will use Stopwatch class to measure how much time it takes for aggregator to download and insert all posts from all registered blogs. After every run we empty posts table in database. Serial aggregation Before doing parallel stuff let’s take a look at serial implementation of feed aggregator. All tasks happen one after other. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();           for (var index = 0; index <blogs.Count; index++)         {              ImportFeed(blogs[index]);         }     }       private void ImportFeed(BlogDto blog)     {         if(blog == null)             return;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                 }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)         {             SaveRssFeedItem(item, blog.Id, blog.CreatedById);         }     }       private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } Serial implementation of feed aggregator downloads and inserts all posts with 25.46 seconds. Task parallelism Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from MSDN page Task Parallelism (Task Parallel Library) and Wikipedia page Task parallelism. Although finding parts of code that can run safely in parallel without synchronization issues is not easy task we are lucky this time. Feeds import and parsing is perfect candidate for parallel tasks. We can safely parallelize feeds import because importing tasks doesn’t share any resources and therefore they don’t also need any synchronization. 
After getting the list of blogs we iterate through the collection and start new TPL task for each blog feed aggregation. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index <blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {          var uri = new Uri(blog.RssUrl);          var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)          {              SaveRssFeedItem(item, blog.Id, blog.CreatedById);          }     }     private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } You should notice first signs of the power of TPL. We made only minor changes to our code to parallelize blog feeds aggregating. On my machine this modification gives some performance boost – time is now 17.57 seconds. Data parallelism There is one more way how to parallelize activities. Previous section introduced task or operation based parallelism, this section introduces data based parallelism. By MSDN page Data Parallelism (Task Parallel Library) data parallelism refers to scenario in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel – imported feed entries. As checking for feed entry existence and inserting it if it is missing from database doesn’t affect other entries the imported feed entries collection is ideal candidate for parallelization. 
internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index <blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           feed.Channel.Items.AsParallel().ForAll(a =>         {             SaveRssFeedItem(a, blog.Id, blog.CreatedById);         });      }        private void ImportAtomFeed(BlogDto blog)      {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           feed.Entries.AsParallel().ForAll(a =>         {              SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);         });      } } We did small change again and as the result we parallelized checking and saving of feed items. This change was data centric as we applied same operation to all elements in collection. On my machine I got better performance again. Time is now 11.22 seconds. Results Let’s visualize our measurement results (numbers are given in seconds). As we can see then with task parallelism feed aggregation takes about 25% less time than in original case. When adding data parallelism to task parallelism our aggregation takes about 2.3 times less time than in original case. More about TPL and PLINQ Adding parallelism to your application can be very challenging task. You have to carefully find out parts of your code where you can safely go to parallel processing and even then you have to measure the effects of parallel processing to find out if parallel code performs better. If you are not careful then troubles you will face later are worse than ones you have seen before (imagine error that occurs by average only once per 10000 code runs). Parallel programming is something that is hard to ignore. Effective programs are able to use multiple cores of processors. Using TPL you can also set degree of parallelism so your application doesn’t use all computing cores and leaves one or more of them free for host system and other processes. And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In next major version all .NET languages will have built-in support for parallel programming. There will be also new language constructs that support parallel programming. 
    Currently you can download Visual Studio Async to get some idea about what is coming. Conclusion Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it way easier for us. In this posting we started with a feed aggregator that imports feed items in serial mode. With two steps we parallelized feed importing and entry inserting, gaining a 2.3-times improvement in performance. Although this number is specific to my test environment, it shows clearly that parallel programming may raise the performance of your application significantly.
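    As a footnote to the degree-of-parallelism remark above, the following is a small self-contained sketch (not part of the original posting) of how TPL and PLINQ can cap the number of cores used; the data and delegates are placeholders.

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    class DegreeOfParallelismDemo
    {
        static void Main()
        {
            int[] items = Enumerable.Range(1, 20).ToArray();

            // TPL: cap data-parallel work at 3 concurrent tasks,
            // leaving the remaining cores free for the host system.
            var options = new ParallelOptions { MaxDegreeOfParallelism = 3 };
            Parallel.ForEach(items, options, i => Console.WriteLine($"TPL processed {i}"));

            // PLINQ: the same cap expressed on a query, as done on feed items in the posting.
            items.AsParallel()
                 .WithDegreeOfParallelism(3)
                 .ForAll(i => Console.WriteLine($"PLINQ processed {i}"));
        }
    }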

    Read the article

  • How to OpenWebConfiguration with physical path?

    - by aron
    Hi, I have a WinForms app that creates a site in IIS7. One function needs to open the web.config file and make a few updates (connection string, SMTP, impersonation). However, I do not have the virtual path, just the physical path. Is there any way I can still use WebConfigurationManager? I need to use its ability to find sections and read/write. System.Web.Configuration.WebConfigurationManager.OpenWebConfiguration Thanks!!
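    One possible approach (a sketch only, with illustrative paths and section names, not a verified answer for this exact setup): map the physical folder through a WebConfigurationFileMap and open it with WebConfigurationManager.OpenMappedWebConfiguration.

    using System.Configuration;
    using System.Web.Configuration;   // references to System.Web and System.Configuration required

    class WebConfigByPhysicalPath
    {
        static void Main()
        {
            // Hypothetical physical folder that contains the site's web.config.
            string physicalPath = @"C:\inetpub\wwwroot\MyNewSite";

            var fileMap = new WebConfigurationFileMap();
            fileMap.VirtualDirectories.Add("/", new VirtualDirectoryMapping(physicalPath, true));

            Configuration config = WebConfigurationManager.OpenMappedWebConfiguration(fileMap, "/");

            // Update a connection string (the name "Default" is illustrative).
            config.ConnectionStrings.ConnectionStrings["Default"].ConnectionString =
                "Server=.;Database=MyDb;Integrated Security=true";

            // Typed access to other sections, e.g. impersonation.
            var identity = (IdentitySection)config.GetSection("system.web/identity");
            identity.Impersonate = true;

            config.Save();
        }
    }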

    Read the article

  • Physical Address to GeoLocation UK

    - by phpguy
    Is there a good physical address to GeoLocation conversion database in the UK? I am trying to use this to build a globrix style search box http://www.globrix.com/ for a web application. Any pointers will be nice. I have been searching for hours. I have found several that convert UK Postcodes into Geolocation. But I need the addresses listed as on Globrix.

    Read the article

  • Restoring MySQL database from physical files

    - by Abdullah Jibaly
    Is it possible to restore a MySQL database from the physical database files? I have a directory that has the following file types: client.frm, client.MYD, client.MYI, and the same for about 20 more tables. I usually use mysqldump or a similar tool to get everything into one SQL file, so what is the way to deal with these types of files?

    Read the article

  • How best can I extract a logical model from a physical DB model

    - by Dean
    We have made substantial changes to our physical DB, and now, as it is the end of the project, I would like to abstract a logical model from this to allow me to generate schemas for both Oracle and SQL Server. Can anyone guide me as to the best way to achieve this? I was hoping Toad Data Modeler would help, but I can't seem to see any options to do what I require.

    Read the article

  • Utilizing 5 physical servers in 1 cluster

    - by Vijay Gharge
    Hi all, I have 5 physical servers with low-end memory & CPU resources. I want to create 1 cluster using all these servers and run a MySQL DB on it, such that the MySQL DB would utilize all 5 servers' CPU power to execute DB queries, and the same for memory. Could you please help me understand how to achieve this? Regards,

    Read the article

  • Cloud computing could save Europe 763 billion euros and generate the creation of 2.4 million jobs

    Cloud computing could save Europe 763 billion euros and generate the creation of 2.4 million jobs. Our neighbours across the Channel have just published a very interesting study. It reveals that major adoption of cloud computing in Europe could have extremely positive consequences for the economy. Mass use of this technology would allow the countries of the Union to save 763 billion euros over five years! Indeed, it would spare companies from having to build all of their IT infrastructure. Instead, they would just have to rent various services and storage. Cloud computing also makes it possible to save energy, and while its price still seems...

    Read the article

  • SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014

    - by Pinal Dave
    SQL Server 2014 has new cardinality estimation logic/algorithm. The cardinality estimation logic is responsible for the quality of query plans and is largely responsible for improving performance for any query. This logic was not updated for quite a while, but in the latest version, SQL Server 2014, this logic is re-designed. The new logic now incorporates various assumptions and algorithms of OLTP and warehousing workloads. Cardinality estimates are a prediction of the number of rows in the query result. The query optimizer uses these estimates to choose a plan for executing the query. The quality of the query plan has a direct impact on improving query performance. ~ Source: MSDN Let us see a quick example of how cardinality estimation improves performance for a query. I will be using the AdventureWorks database for my example. Before we start with this demonstration, remember that even though you have SQL Server 2014, to see the effect of the new cardinality estimates you will need your database compatibility mode set to 120, which is for SQL Server 2014. If your server instance is SQL Server 2014 but you have set your database compatibility mode to 110 or any other earlier version, you will get query performance like an older version of SQL Server. Now we will execute the following query in two different compatibility modes and see its performance. (Note that my SQL Server instance is of version 2014). USE AdventureWorks2014 GO -- ------------------------------- -- NEW Cardinality Estimation ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120 GO EXEC [dbo].[uspGetManagerEmployees] 44 GO -- ------------------------------- -- Old Cardinality Estimation ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110 GO EXEC [dbo].[uspGetManagerEmployees] 44 GO Result of Statistics IO Compatibility level 120 Table ‘Person’. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table ‘Employee’. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table ‘Worktable’. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table ‘Worktable’. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Compatibility level 110 Table ‘Worktable’. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table ‘Person’. Scan count 0, logical reads 137, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table ‘Employee’. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table ‘Worktable’. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. You will notice that in the case of compatibility level 110 there are 137 logical reads from the Person table, whereas in the case of compatibility level 120 there are only 6 logical reads from the Person table. This drastically improves the performance of the query. If we enable the execution plan, we can see the same as well. I hope you will find this quick example helpful. You can read more about this in my latest Pluralsight course.
Reference: Pinal Dave (http://blog.SQLAuthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • computing matting Laplacian matrix of an Image

    - by ajith
    Hi everyone, I need to compute the matting Laplacian matrix L for an (nXn) image in OpenCV. The computation goes as follows: δij − (1/|wk|)·[1 + (Ii − µk)(Ij − µk) / (ε/|wk| + σk²)] for all (i, j) ∈ wk; summing over k yields the (i, j)-th element of L. Here δij is the Kronecker delta, µk and σk² are the mean and variance of the intensities in the window wk around k, and |wk| is the number of pixels in this window; wk is a 3X3 window. I am not clear about 2 things: 1. What will be the size of L: nXn, or (nXn)X(nXn)? 2. How do I select Ii and Ij separately in a 2D image?

    Read the article

  • learning the Lower levels of computing

    - by Ben
    I am a software developer with four years' experience in .NET development. I always like to keep up to date with the latest technologies (.NET related, normally) being released and love learning them. I didn't, however, go to university, and learnt all I know through helpful colleagues, .NET courses, the internet and good old books. I feel that I am a good developer, but without learning the lower levels of a computer as you would in the first year of a computer-related uni course, I get lost when talking to people about a lot of the more technical, lower-level computing. Are there any books that anyone could recommend that would cover the lower levels of what is going on when I click "Run" in Visual Studio? I feel out of my depth when my boss says to me "That's running in the CPU cache" or "you're limited by disk reads there", and would like to feel more confident when talking about how the hardware components talk to each other (CPU to RAM etc.). Apologies if that's a vague question, or has been asked before (I did check and couldn't find anything on here that answers my question).

    Read the article

  • Pohlig–Hellman algorithm for computing discrete logarithms

    - by drelihan
    Hi folks, I'm working on coding the Pohlig-Hellman algorithm but I am having problems understanding the steps based on the definition of the algorithm. Going by the Wiki of the algorithm: http://en.wikipedia.org/wiki/Pohlig%E2%80%93Hellman_algorithm I know the first part 1) is to calculate the prime factorization of p-1 - which is fine. However, I am not sure what I need to do in steps 2) and 3). Can someone help by explaining this in plain English, or in pseudocode? I want to code the solution myself obviously, but I cannot make any more progress unless I understand the algorithm. Note: I have done a lot of searching for this and I read S. Pohlig and M. Hellman (1978), "An Improved Algorithm for Computing Logarithms over GF(p) and its Cryptographic Significance", but it's still not really making sense to me. Thanks in advance
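    As a rough illustration of steps 2) and 3) (not the poster's code, and only the simplest case): for each prime factor q of the group order n = p - 1, project the problem into the order-q subgroup, solve that small discrete log, and recombine the per-prime results with the Chinese Remainder Theorem. The C# sketch below assumes n is squarefree (every prime factor appears exactly once) and solves the small logs by brute force; the full algorithm repeats the idea for prime powers and typically uses baby-step/giant-step for the subproblems.

    using System;
    using System.Collections.Generic;
    using System.Numerics;

    static class PohligHellmanSketch
    {
        // Brute-force discrete log in a small subgroup: find x with g^x == h (mod p), 0 <= x < order.
        static BigInteger SmallLog(BigInteger g, BigInteger h, BigInteger order, BigInteger p)
        {
            BigInteger cur = 1;
            for (BigInteger x = 0; x < order; x++)
            {
                if (cur == h) return x;
                cur = cur * g % p;
            }
            throw new ArgumentException("no discrete log exists");
        }

        // Simplified Pohlig-Hellman: assumes the group order n = p - 1 is squarefree;
        // 'factors' are its distinct prime factors.
        public static BigInteger Log(BigInteger g, BigInteger h, BigInteger p, IEnumerable<BigInteger> factors)
        {
            BigInteger n = p - 1, x = 0, modulus = 1;
            foreach (BigInteger q in factors)
            {
                BigInteger gq = BigInteger.ModPow(g, n / q, p);   // generator of the order-q subgroup
                BigInteger hq = BigInteger.ModPow(h, n / q, p);
                BigInteger xq = SmallLog(gq, hq, q, p);           // this is x mod q
                // Incremental CRT: merge x ≡ xq (mod q) with the congruences found so far.
                BigInteger inv = ModInverse(modulus % q, q);
                x = x + modulus * (((xq - x) % q + q) % q * inv % q);
                modulus *= q;
            }
            return x;
        }

        // Modular inverse via the extended Euclidean algorithm.
        static BigInteger ModInverse(BigInteger a, BigInteger m)
        {
            BigInteger t = 0, newT = 1, r = m, newR = a % m;
            while (newR != 0)
            {
                BigInteger quot = r / newR;
                (t, newT) = (newT, t - quot * newT);
                (r, newR) = (newR, r - quot * newR);
            }
            return ((t % m) + m) % m;
        }
    }

    For example, Log(3, 22, 31, new BigInteger[] { 2, 3, 5 }) returns 17, since 3^17 ≡ 22 (mod 31).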

    Read the article

  • GPU Computing - # of GPUs supported

    - by TehTypoKing
    I currently have a desktop with 6 GPUs (3x HD 5970s) in non-CrossFire mode. Unfortunately, it seems that Windows 7 64-bit only supports up to 4 GPUs. I have not been able to find a reliable source to deny or confirm this. If Windows 7 has this limitation, is there a Linux flavor that supports more than 4 GPUs? In case you are wondering, this is not for gaming but for high-speed single-precision computing. With this current setup (if I can find 6-GPU support) I am looking to reach 13.8 teraflops. Also, my motherboard does support 3 16x PCI Express gen2 slots... and I have a 1500W power supply plugged into a 20-amp outlet. Windows is able to detect all 6 GPUs... although 2 of them display the warning "Drivers failed to load". To recap: - Can Windows support 6 GPUs? - If not, does Linux? Thank you.

    Read the article

  • What grid distributed computing frameworks are currently favoured for trading systems

    - by Rich
    There seem to be quite a few grid computing frameworks out there, but which ones are actually being used to any great extent by the investment banks for the purposes of low-latency distributed calculation? I'd be interested to hear answers covering Windows, Linux and cross-platform. Also, which RPC mechanisms seem to be favoured most? I've heard that for reasons of low latency and speed, the calculations themselves are quite often written in C++/C, as calculations running on VMs are several orders of magnitude slower than native code. Does this seem to be a common scenario in practice? e.g. a distributed .NET grid framework running calculations written in native C++/C?

    Read the article

  • Are there any disadvantages of having a "free fall sensor" on a hard disk drive?

    - by therobyouknow
    This is a general question that came out of a specific comparison between the Western Digital Scorpio WD3200BEKT and the Western Digital Scorpio WD3200BJKT (which is the same as the former but with a free fall sensor). Note: I'm not asking for a review or appraisal of these specific drives, as the general question does apply to other brands as well, though your input would help my decision. To break the general question down in order to answer it, I would be looking for comments on things like: Is it necessary to have differing physical dimensions between free fall sensor drives and those without, e.g. does it make the drive any thicker, and therefore reduce the systems where it can be installed - particularly smaller laptops? Does it actually make the system less reliable, because of false alarms whereby the drive thought the laptop was falling but it wasn't? I suppose that the fact that a manufacturer produces both drives with and without free fall sensors says something about possible disadvantages. Or it could be a standard marketing technique, whereby making drives both with and without the sensor results in a larger sales volume than making only those with the feature.

    Read the article

  • Create PDF in memory instead of physical file

    - by acadia
    How does one create a PDF in a MemoryStream instead of a physical file using iTextSharp? The code below creates an actual PDF file. Instead, how can I create a byte[] and store the output in it so that I can return it from a function? using iTextSharp.text; using iTextSharp.text.pdf; Document doc = new Document(iTextSharp.text.PageSize.LETTER, 10, 10, 42, 35); PdfWriter wri = PdfWriter.GetInstance(doc, new FileStream("c:\\Test11.pdf", FileMode.Create)); doc.Open();//Open Document to write Paragraph paragraph = new Paragraph("This is my first line using Paragraph."); Phrase pharse = new Phrase("This is my second line using Pharse."); Chunk chunk = new Chunk(" This is my third line using Chunk."); doc.Add(paragraph); doc.Add(pharse); doc.Add(chunk); doc.Close(); //Close document
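    A sketch of one way this is commonly handled (names and content below are illustrative): hand PdfWriter a MemoryStream instead of a FileStream, close the document, and return the buffered bytes. MemoryStream.ToArray() still works after the stream has been closed.

    using System.IO;
    using iTextSharp.text;
    using iTextSharp.text.pdf;

    static class PdfInMemory
    {
        public static byte[] BuildPdf()
        {
            using (var ms = new MemoryStream())
            {
                var doc = new Document(PageSize.LETTER, 10, 10, 42, 35);
                PdfWriter.GetInstance(doc, ms);

                doc.Open();
                doc.Add(new Paragraph("This is my first line using Paragraph."));
                doc.Add(new Phrase("This is my second line using Phrase."));
                doc.Add(new Chunk(" This is my third line using Chunk."));
                doc.Close();              // closing flushes the finished PDF into the MemoryStream

                return ms.ToArray();      // ToArray is still valid after the stream is closed
            }
        }
    }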

    Read the article
