Search Results

Search found 80052 results on 3203 pages for 'data load performance'.

  • SQLAuthority News – Presenting at Tech-Ed On Road – Ahmedabad – June 11, 2011 – Wait Types and Queues

    - by pinaldave
    I will be presenting in person on the subject of SQL Server Wait Types and Queues at Ahmedabad on June 11, 2011. Here is a quick summary of the session. SQL Server Waits and Queues – Your Gateway to Perf. Troubleshooting. Time: 11:15am – 12:15pm, June 11, 2011. Just like a horoscope, SQL Server Waits and Queues can reveal your past, explain your present and predict your future. SQL Server performance tuning uses Waits and Queues as a proven method to identify the best opportunities to improve performance. A glance at the Wait Types can tell you where there is a bottleneck. Learn how to identify bottlenecks and potential resolutions in this fast-paced, advanced performance tuning session. This session is based on my performance tuning Wait Types and Queues series: SQL SERVER – Summary of Month – Wait Type – Day 28 of 28. During the session there will be a quiz, and those who get the right answers will receive very interesting gifts from me. Do not miss a single minute of the event. We are also going to have two rock-star speakers – Harish Vaidyanathan and Jacob Sebastian. Here are the details for the event: SQLAuthority News – Community Tech Days – TechEd on The Road – Ahmedabad – June 11, 2011 Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology

    Read the article

  • SQL SERVER – Using MAXDOP 1 for Single Processor Query – SQL in Sixty Seconds #008 – Video

    - by pinaldave
    Today’s SQL in Sixty Seconds video is inspired by my presentation at TechEd India 2012 on Speed Up! – Parallel Processes and Unparalleled Performance. There are always special cases when it comes to SQL Server: there are always a few queries which give optimal performance when executed on a single processor, and there are always queries which give optimal performance when executed on multiple processors. I will show how to identify such queries, as well as the best practices related to them. In this quick video I demonstrate how, if a query gives optimal performance when running on a single CPU, one can restrict it to a single CPU by using the hint OPTION (MAXDOP 1). More on Errors: Difference Temp Table and Table Variable – Effect of Transaction Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT Debate – Table Variables vs Temporary Tables – Quiz – Puzzle – 13 of 31 I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Video
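
    The video demonstrates the hint in T-SQL; purely as a rough client-side illustration (not taken from the video), here is a minimal ADO.NET sketch that applies the same hint from C# code. The connection string and the AdventureWorks table are placeholder assumptions:

    ```csharp
    using System;
    using System.Data.SqlClient;

    class MaxDopExample
    {
        static void Main()
        {
            // Hypothetical connection string and sample table.
            // OPTION (MAXDOP 1) limits this one query to a single CPU.
            using (var conn = new SqlConnection(@"Server=.;Database=AdventureWorks;Integrated Security=true"))
            using (var cmd = new SqlCommand(
                "SELECT COUNT(*) FROM Sales.SalesOrderDetail OPTION (MAXDOP 1);", conn))
            {
                conn.Open();
                Console.WriteLine(cmd.ExecuteScalar());
            }
        }
    }
    ```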

    Read the article

  • Silverlight 4 Twitter Client – Part 6

    - by Max
    In this post, we are going to look at implementing lists in our Twitter application, and also at enhancing the data grid to display the status messages in a pleasing way with profile images. Twitter lists are a really cool feature that they recently added; I love them, and I have quite a few lists set up: one for .NET gurus, one for SQL Server gurus and one for a few celebrities. You can follow them here. Now let us move on to our tutorial.

    1) Lists can be subscribed to in two ways: they can be the user's own lists, which he has created, or lists that the user is following. For example, I have created 3 lists myself and I am following 1 list created by another user. Both cannot be fetched in the same API call; it's a two-step process.

    2) In the TwitterCredentialsSubmit method we have in Home.xaml.cs, let us make the first API call to get the lists that the user has created. For this, the call has to be made to https://twitter.com/<TwitterUsername>/lists.xml. The API reference is available here.

    ```csharp
    myService1.AllowReadStreamBuffering = true;
    myService1.UseDefaultCredentials = false;
    myService1.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
    myService1.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsRequestCompleted);
    myService1.DownloadStringAsync(new Uri("https://twitter.com/" + GlobalVariable.getUserName() + "/lists.xml"));
    ```

    3) Now let us look at implementing the event handler – ListsRequestCompleted – for this.

    ```csharp
    public void ListsRequestCompleted(object sender, System.Net.DownloadStringCompletedEventArgs e)
    {
        if (e.Error != null)
        {
            StatusMessage.Text = "This application must be installed first.";
            parseXML("");
        }
        else
        {
            //MessageBox.Show(e.Result.ToString());
            parseXMLLists(e.Result.ToString());
        }
    }
    ```

    4) Now let us look at parseXMLLists in detail.

    ```csharp
    xdoc = XDocument.Parse(text);
    var answer = (from status in xdoc.Descendants("list")
                  select status.Element("name").Value);
    foreach (var p in answer)
    {
        Border bord = new Border();
        bord.CornerRadius = new CornerRadius(10, 10, 10, 10);
        Button b = new Button();
        b.MinWidth = 70;
        b.Background = new SolidColorBrush(Colors.Black);
        b.Foreground = new SolidColorBrush(Colors.Black);
        //b.Width = 70;
        b.Height = 25;
        b.Click += new RoutedEventHandler(b_Click);
        b.Content = p.ToString();
        bord.Child = b;
        TwitterListStack.Children.Add(bord);
    }
    ```

    So what am I doing here? I am dynamically creating a button for each of the lists and putting them within a StackPanel, and for each of these buttons I am wiring up an event handler, b_Click, which will be fired on button click. We will look into this method in detail soon. For now, let us get the buttons displayed.

    5) The user might also have some lists to which he has subscribed. We need to create buttons for these as well. At the end of the TwitterCredentialsSubmit method, we need to make a call to http://api.twitter.com/1/<TwitterUsername>/lists/subscriptions.xml. The reference is available here. The code looks like this:

    ```csharp
    myService2.AllowReadStreamBuffering = true;
    myService2.UseDefaultCredentials = false;
    myService2.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
    myService2.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsSubsRequestCompleted);
    myService2.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/subscriptions.xml"));
    ```

    6) In the event handler – ListsSubsRequestCompleted – we need to parse through the XML string and create a button for each of the subscribed lists; let us see how. I have taken only the "full_name"; you can choose what you want, refer to the documentation here. Note that full_name has the @<UserName>/<ListName> format – this will be useful very soon.

    ```csharp
    xdoc = XDocument.Parse(text);
    var answer = (from status in xdoc.Descendants("list")
                  select status.Element("full_name").Value);
    foreach (var p in answer)
    {
        Border bord = new Border();
        bord.CornerRadius = new CornerRadius(10, 10, 10, 10);
        Button b = new Button();
        b.Background = new SolidColorBrush(Colors.Black);
        b.Foreground = new SolidColorBrush(Colors.Black);
        //b.Width = 70;
        b.MinWidth = 70;
        b.Height = 25;
        b.Click += new RoutedEventHandler(b_Click);
        b.Content = p.ToString();
        bord.Child = b;
        TwitterListStack.Children.Add(bord);
    }
    ```

    Please note that I am letting the button width size itself to the content, and also giving it a minimum width. I wanted to create rounded-corner buttons, but for some reason that is not working. Also add this StackPanel – TwitterListStack – to Home.xaml:

    ```csharp
    <StackPanel HorizontalAlignment="Center" Orientation="Horizontal" Name="TwitterListStack"></StackPanel>
    ```

    After doing this, you will get a series of buttons at the top of the home page.

    7) Now the button click event handler – b_Click. In this method, once a button is clicked, I call another method with the content string of the clicked button as the parameter.

    ```csharp
    Button b = (Button)e.OriginalSource;
    getListStatuses(b.Content.ToString());
    ```

    8) Now the getListStatuses method:

    ```csharp
    toggleProgressBar(true);
    WebRequest.RegisterPrefix("http://", System.Net.Browser.WebRequestCreator.ClientHttp);
    WebClient myService = new WebClient();
    myService.AllowReadStreamBuffering = true;
    myService.UseDefaultCredentials = false;
    myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted);
    if (listName.IndexOf("@") > -1 && listName.IndexOf("/") > -1)
    {
        string[] arrays = null;
        arrays = listName.Split('/');
        arrays[0] = arrays[0].Replace("@", " ").Trim();
        //MessageBox.Show(arrays[0]);
        //MessageBox.Show(arrays[1]);
        string url = "http://api.twitter.com/1/" + arrays[0] + "/lists/" + arrays[1] + "/statuses.xml";
        //MessageBox.Show(url);
        myService.DownloadStringAsync(new Uri(url));
    }
    else
        myService.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/" + listName + "/statuses.xml"));
    ```

    Please note that the URL to call differs based on the list clicked. If it is user-created, the URL format is http://api.twitter.com/1/<CurrentUser>/lists/<ListName>/statuses.xml, but if it is a subscribed list, it is http://api.twitter.com/1/<ListOwnerUserName>/lists/<ListName>/statuses.xml. The first one is pretty straightforward to implement, but for a subscribed list we need to split the listName string to get the list owner and list name, and use them to form the URL. That is what I have done in this method: if the listName contains "@" and "/", I build the URL differently.

    9) Until now we have been using only a few nodes of the status message XML string; now we will fetch a new field – "profile_image_url". Images in the datagrid – cool. For that, we need to modify our Status.cs file to include two more fields, one string and one BitmapImage, with get and set.

    ```csharp
    public string profile_image_url { get; set; }
    public BitmapImage profileImage { get; set; }
    ```

    10) Now let us change the generic parseXML method which is used for binding to the datagrid.

    ```csharp
    public void parseXML(string text)
    {
        XDocument xdoc;
        xdoc = XDocument.Parse(text);
        statusList = new List<Status>();
        statusList = (from status in xdoc.Descendants("status")
                      select new Status
                      {
                          ID = status.Element("id").Value,
                          Text = status.Element("text").Value,
                          Source = status.Element("source").Value,
                          UserID = status.Element("user").Element("id").Value,
                          UserName = status.Element("user").Element("screen_name").Value,
                          profile_image_url = status.Element("user").Element("profile_image_url").Value,
                          profileImage = new BitmapImage(new Uri(status.Element("user").Element("profile_image_url").Value))
                      }).ToList();
        DataGridStatus.ItemsSource = statusList;
        StatusMessage.Text = "Datagrid refreshed.";
        toggleProgressBar(false);
    }
    ```

    Here we are creating a new bitmap image from the image URL, creating a new Status object for every status, and binding them all to the data grid. Refer to the Twitter API documentation here; you can choose any column you want.

    11) Until now we have been using auto-generated columns for the data grid, but if you want it to be really cool, you need to define the columns with templates, etc.

    ```csharp
    <data:DataGrid AutoGenerateColumns="False" Name="DataGridStatus" Height="Auto" MinWidth="400">
        <data:DataGrid.Columns>
            <data:DataGridTemplateColumn Width="50" Header="">
                <data:DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                        <Image Source="{Binding profileImage}" Width="50" Height="50" Margin="1"/>
                    </DataTemplate>
                </data:DataGridTemplateColumn.CellTemplate>
            </data:DataGridTemplateColumn>
            <data:DataGridTextColumn Width="Auto" Header="User Name" Binding="{Binding UserName}" />
            <data:DataGridTemplateColumn MinWidth="300" Width="Auto" Header="Status">
                <data:DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                        <TextBlock TextWrapping="Wrap" Text="{Binding Text}"/>
                    </DataTemplate>
                </data:DataGridTemplateColumn.CellTemplate>
            </data:DataGridTemplateColumn>
        </data:DataGrid.Columns>
    </data:DataGrid>
    ```

    I have used only three columns – profile image, username and status text. Now our datagrid will look super cool. Coincidentally, Tim Heuer is on the screenshot; he is a Silverlight guru and works on the SL team at Microsoft. His blog is really super. Here is the zipped file for all the xaml, xaml.cs and class files. OK, let us stop here for now; we will look at implementing a few more features in the next few posts, and then I am going to look into developing an ASP.NET MVC 2 application. Hope you all liked this post. If you have any queries/suggestions, feel free to comment below or contact me. Cheers! Technorati Tags: Silverlight,LINQ,Twitter API,Twitter,Silverlight 4

    Read the article

  • Having trouble with projection matrix, need help

    - by Mr.UNOwen
    I'm having trouble with what appears to be the projection matrix. Given a wide enough screen, when a cube is at the left- or right-most edge, the left or right wall appears stretched to the point that the front face is 1/10 the width of the side. I do update the screen ratio along with the projection matrix and viewport on screen resize, so am I safe to assume all the trouble is in the matrix class? Also, the cube follows the mouse, but it is only vertically aligned and ends up ahead of the mouse when going left or right from the center of the screen. The perspective function:

    ```cpp
    /**
     * setPerspective
     *
     * @param fov: angle in radians
     * @param aspect: screen ratio w/h
     * @param near: near distance
     * @param far: far distance
     **/
    void APCamera::setPerspective(GMFloat_t fov, GMFloat_t aspect, GMFloat_t near, GMFloat_t far)
    {
        GMFloat_t difZ = near - far;
        GMFloat_t *data;

        mProjection->clear(); // set to identity matrix
        data = mProjection->getData();

        GMFloat_t v = 1.0f / tan(fov / 2.0f);

        data[_AP_MAA] = v / aspect;
        data[_AP_MBB] = v;
        data[_AP_MCC] = (far + near) / difZ;
        data[_AP_MCD] = -1.0f;
        data[_AP_MDD] = 0.0f;
        data[_AP_MDC] = 2.0f * far * near / difZ;

        mRatio = aspect;
        mInvProjOutdated = true;
        mIsPerspective = true;
    }
    ```

    and...

    ```cpp
    #define _AP_MAA 0
    #define _AP_MAB 1
    #define _AP_MAC 2
    #define _AP_MAD 3
    #define _AP_MBA 4
    #define _AP_MBB 5
    #define _AP_MBC 6
    #define _AP_MBD 7
    #define _AP_MCA 8
    #define _AP_MCB 9
    #define _AP_MCC 10
    #define _AP_MCD 11
    #define _AP_MDA 12
    #define _AP_MDB 13
    #define _AP_MDC 14
    #define _AP_MDD 15
    ```

    Read the article

  • ArchBeat Link-o-Rama for 2012-03-22

    - by Bob Rhubart
    2012 Real World Performance Tour Dates | Performance Tuning | Performance Engineering www.ioug.org Coming to your town: a full day of real world database performance with Tom Kyte, Andrew Holdsworth, and Graham Wood. Rochester, NY - March 8; Los Angeles, CA - April 30; Orange County, CA - May 1; Redwood Shores, CA - May 3. Oracle Cloud Conference: dates and locations worldwide http://www.oracle.com Find the cloud strategy that’s right for your enterprise. 2 new Cloud Computing resources added to free IT Strategies from Oracle library www.oracle.com IT Strategies from Oracle, the free authorized library of guidelines and reference architectures, has just been updated to include two new documents: A Pragmatic Approach to Cloud Adoption; Data Sheet: Oracle's Approach to Cloud. SOA! SOA! SOA!; OSB 11g Recipes and Author Interviews www.oracle.com Featured this week on the OTN Architect Homepage, along with the latest articles, white papers, blogs, events, and other resources for software architects. Enterprise app shops announcements are everywhere | Andy Mulholland www.capgemini.com Capgemini's Andy Mulholland discusses "the 'front office' revolution using new technologies in a different manner to the standard role of IT and its attendant monolithic applications based on Client-Server technologies." Encapsulating OIM API’s in a Web Service for OIM Custom SOA Composites | Alex Lopez fusionsecurity.blogspot.com Alex Lopez describes "how to encapsulate OIM API calls in a Web Service for use in a custom SOA composite to be included as an approval process in a request template." Thought for the Day "Don't worry about people stealing your ideas. If your ideas are any good, you'll have to ram them down people's throats." — Howard H. Aiken

    Read the article

  • A Cautionary Tale About Multi-Source JNDI Configuration

    - by scott.s.nelson(at)oracle.com
    Here's a bit of fun with WebLogic JDBC configurations. I ran into this issue after reading that p13nDataSource and cgDataSource-NonXA should not be configured as multi-source. There were some issues changing them to use the basic JDBC connection string, and when rolling back to the bad configuration the server went "Boom". Since one purpose behind this blog is to share lessons learned, I just had to post this. If you write your descriptors manually (as opposed to generating them using the WLS console) and put a comma-separated list of JNDI addresses like this:

    ```xml
    <jdbc-data-source-params>
      <jndi-name>weblogic.jdbc.jts.commercePool,contentDataSource,
        contentVersioningDataSource,portalFrameworkPool</jndi-name>
      <algorithm-type>Load-Balancing</algorithm-type>
      <data-source-list>portalDataSource-rac0,portalDataSource-rac1</data-source-list>
      <failover-request-if-busy>false</failover-request-if-busy>
    </jdbc-data-source-params>
    ```

    then, so long as the first address resolves, it will still work. Sort of. If you call this connection to do an update, only one node of the RAC instance is updated. Other wonderful side effects include the server refusing to start sometimes. The proper way to list the JNDI sources is one per node, like this:

    ```xml
    <jdbc-data-source-params>
      <jndi-name>weblogic.jdbc.jts.commercePool</jndi-name>
      <jndi-name>contentDataSource</jndi-name>
      <jndi-name>contentVersioningDataSource</jndi-name>
      <jndi-name>portalFrameworkPool</jndi-name>
      <algorithm-type>Load-Balancing</algorithm-type>
      <data-source-list>portalDataSource-rac0,
        portalDataSource-rac1,
        portalDataSource-rac2
      </data-source-list>
      <failover-request-if-busy>false</failover-request-if-busy>
    </jdbc-data-source-params>
    ```

    (Props to Sandeep Seshan for locating the root cause.)

    Read the article

  • SQL Azure Roadmap gets a little clearer – announcements from Tech Ed

    - by Eric Nelson
    On Monday at Tech Ed 2010 we announced new stuff (I like new stuff) that “showcases our continued commitment to deliver value, flexibility and control of data through data cloud services to our customers”. OK, that does sound like marketing speak (and it is), but the good news is there is some meat behind it. We have some decent new features coming, and we also have some clarity on when we will be able to get our hands on those features. SQL Azure Business Edition extends to 50 GB – June 28th: the SQL Azure Business Edition database is now extending from 10 GB to 50 GB, and the new 50 GB database size will be available worldwide starting June 28th. SQL Azure Business Edition subscription offer – August 1st: starting August 1st, we will have a new discounted SQL Azure promotional offer (SQL Azure Development Accelerator Core); more information is available at http://www.microsoft.com/windowsazure/offers/. Public preview of the Data Sync Service – CTP now: the Data Sync Service for SQL Azure allows for more flexible control over data by deciding which data components should be distributed across multiple datacenters in different geographic locations, based on your internal policies and business needs; available as a community technology preview after registering at http://www.sqlazurelabs.com. SQL Server Web Manager for SQL Azure – CTP this summer: SQL Server Web Manager (SSWM) is a lightweight and easy-to-use database management tool for SQL Azure databases, to be offered this summer. Access 2010 support for SQL Azure – available now: yay, at last! Microsoft Office 2010 will natively support data connectivity to SQL Azure; we can now start developing those “departmental apps” with the confidence of a highly available SQL store provisioned in seconds. NB: I don’t believe we will support any previous versions of Access talking to SQL Azure. The pre-announced spatial data support to become live – live now: at MIX in March we announced spatial was coming, and apparently it is now here, although I need to check. Related links: UK based? Sign up at http://ukazure.ning.com SQL Azure Team Blog http://blogs.msdn.com/b/sqlazure/

    Read the article

  • AT&T Application Resource Analyzer in NetBeans IDE

    - by Geertjan
    Here at Øredev in Malmö I met Doug Sillars who does developer outreach for the AT&T Application Resource Optimizer. In this YouTube clip you see Doug explaining how it works and what it can do for optimizing performance of mobile applications. There's a free and open source Android app on GitHub that you can install on Android to collect data and then there's a Java Swing application for analyzing the results. And here's what that application looks like as a plugin in NetBeans IDE, click to enlarge the image, which shows the Android sources of the Data Collector, as well as the Data Analyzer ready to be used to collect data: Since the ARO Data Analyzer is written in Java and has JPanels defining its UI layer, integrating the user interface wasn't hard. Now working on the Actions, so there'll be a new ARO menu with start/stop data collecting menu items, etc, reusing as much of the original code as possible. That part is actually already working. I started up an Android emulator, then started the data collection process from the IDE. Now need to include the Actions for importing the data into the analyzer, together with a few other related features. A pretty cool feature in ARO is video capture, so that a movie can be made by ARO of all the steps taken on the device during the collection process, which will also be nice to have integrated into the NetBeans plugin. Ultimately, this will be handy for anyone creating Android applications in NetBeans IDE since they'll be able to use AT&T's ARO tool for optimizing the performance of the applications they're developing. It will also be useful for those using the built-in Cordova tools in NetBeans IDE to create iOS applications because ARO is also applicable to analyzing iOS application performance.

    Read the article

  • SSIS Denali as part of “Enterprise Information Management”

    - by jorg
    When watching the SQL PASS session “What’s Coming Next in SSIS?” by Steve Swartz, the Group Program Manager for the SSIS team, an interesting question came up: why is SSIS thought of as BI, when we use it so frequently for other sorts of data problems? Steve's answer was that he breaks the world of data work into three parts: processing of inputs; BI; and Enterprise Information Management, meaning all the work you have to do when you have a lot of data to make it useful and clean and get it to the right place. This covers master data management, data quality work, data integration and lineage analysis to keep track of where the data came from. All of these are part of Enterprise Information Management. Next, Steve said Microsoft is developing SSIS as part of a large push in all of these areas in the next release of SQL. So SSIS will be, next to a BI tool, part of Enterprise Information Management in the next release of SQL Server. I'm interested in the different ways people use SSIS; I've basically used it for ETL, data migrations and processing inputs. In which ways did you use SSIS?

    Read the article

  • Working with QuickBooks using LINQPad

    - by dataintegration
    The RSSBus ADO.NET Providers can be used from many applications and development environments. In this article, we show how to use LINQPad to connect to QuickBooks using the RSSBus ADO.NET Provider for QuickBooks. Although this example uses the QuickBooks Data Provider, the same process applies to any of our ADO.NET Providers. Create the Data Model. Step 1: Download and install both the Data Provider from RSSBus and LINQPad (available at www.linqpad.net). Step 2: Create a new project in Visual Studio and create a data model for it using the ADO.NET Entity Data Model wizard. Step 3: Create a new connection by clicking "New Connection", specify the connection string options, and click Next. Step 4: Select the desired tables and views and click Finish to create the data model. Step 5: Right-click on the entity diagram and select 'Add Code Generation Item'. Choose the 'ADO.NET DbContext Generator'. Step 6: Now build the project. The generated files can be used to create a QuickBooks connection in LINQPad. Create the connection to QuickBooks in LINQPad. Step 7: Open LINQPad and click 'Add New Connection'. Step 8: Choose 'Entity Framework DbContext POCO'. Step 9: Choose the data model assembly ('.dll') created by Visual Studio as the 'Path to Custom Assembly'. Choose the name of the custom DbContext, the path to the config file, and assign a name to the connection that will allow you to recognize its purpose. Step 10: Congratulations! Now you have a connection to QuickBooks, and you can query data through LINQPad, as in the sketch below.
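
    As a rough idea of what a query might look like once the connection is set up, a LINQPad C# expression against the generated DbContext could read as follows. The entity set and property names here are hypothetical; they depend on the tables you selected in the wizard:

    ```csharp
    // C# expression typed into a LINQPad query window that uses the
    // DbContext connection created above; "Customers" is an assumed
    // entity set from the generated data model.
    from c in Customers
    where c.Balance > 1000
    orderby c.CompanyName
    select new { c.CompanyName, c.Balance }
    ```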

    Read the article

  • Site Web Analytics not updating Sharepoint 2010

    - by Rohit Gupta
    If you are facing the issue that the Web Analytics reports in SharePoint 2010 Central Administration are not updating data: when you go to your site > Site Settings > Site Web Analytics reports (or Site Collection Analytics reports), you get old data, with the ribbon displaying "Data Last Updated: 12/13/2010 2:00:20 AM". Please ensure that the following things are covered: the Usage and Health Data Collection service is configured correctly; the Log Collection Schedule is configured correctly; the Microsoft SharePoint Foundation Usage Data Import and Microsoft SharePoint Foundation Usage Data Processing timer jobs are configured to run at regular intervals. One last important timer job is the Web Analytics Trigger Workflows Timer Job; ensure that this timer job is enabled and scheduled to run at regular intervals (for each site that you need analytics for). A rough way to check these jobs programmatically is sketched below. After you have ensured that the web analytics service configuration is working fine, that the Usage Data Import job is importing the *.usage files from the ULS logs folder into the WSS_Logging database, and that all the required timer jobs are running as expected, wait for a day for the report to get updated. The report gets updated automatically at 2:00 am in the morning, and I could not find a way to control the schedule for this report update job. So be sure to wait for a day before giving up :)
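
    For illustration, a minimal sketch of listing the relevant timer jobs with the SharePoint 2010 server object model. This is an assumption on my part, not from the original post; it assumes Microsoft.SharePoint.dll and that the code runs on a farm server with sufficient rights:

    ```csharp
    using System;
    using Microsoft.SharePoint.Administration;

    class TimerJobCheck
    {
        static void Main()
        {
            // Walk the farm's timer job definitions and report the
            // usage/analytics jobs mentioned above.
            foreach (SPJobDefinition job in SPFarm.Local.TimerService.JobDefinitions)
            {
                if (job.Title.Contains("Usage Data") || job.Title.Contains("Web Analytics"))
                {
                    Console.WriteLine("{0}: enabled={1}, schedule={2}",
                        job.Title, !job.IsDisabled, job.Schedule);
                }
            }
        }
    }
    ```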

    Read the article

  • flat files vs. RDBMS database, few read/writes, few changes

    - by Bob Lapique
    I have to handle data from long-term (years, decades) climate monitoring stations. The data flow usually starts with raw data (voltages, etc.) plus quality-check information (pressure, temperature, flow rate, etc.), generally recorded at 1 Hz. Then the data are assigned a quality flag (by a human and/or a program), processed (calibration curves applied) and flagged again. So we basically end up with 2 datasets: raw and processed data. New data are typically added once a day (~500 KB/day/instrument). Simultaneous queries are not likely to ever happen. I wanted to go for an RDBMS (we have a MySQL server) and have some experience in database design, but the IT guy keeps telling me that flat files will do the job just as well. I suspect him of trying to make his life easier when it comes to backing up/upgrading MySQL. There are not so many links between the data, and they don't change much, but the quality flags will change. An RDBMS makes it easier to compare data from different instruments on a "many days" scale, compared to daily text files. Well, what would you advise? Thanks.

    Read the article

  • Bundling in visual studio 2012 for web optimization

    - by Jalpesh P. Vadgama
    I have been writing a series of posts about Visual Studio 2012 features, describing what is new in Visual Studio 2012, and this post is also part of that series. Nowadays web applications and sites provide more and more features, and because of that we include lots of JavaScript and CSS files in our web applications. So once the site loads, all the JavaScript and CSS files are loaded in the browser, and if you have lots of JavaScript files, it consumes a lot of time for the browser to request them all. The following image shows exactly that situation: you can see a total of 25 files loaded, at more than 1 MB of total size. As we need our web application or site to be very responsive and high-performance, this is a performance bottleneck. In situations like this, the bundling feature of Visual Studio 2012 and ASP.NET 4.5 comes in very handy. With the help of this feature we can optimize and increase the performance of our application. To enable this feature in Visual Studio 2012, we just set debug="false" in the web.config of our application. Once you enable this feature and run the application in the browser, the traffic will have fewer items, like the following: as you can see in the above image, there are only 8 items. So after enabling bundling, the registered JS and CSS files are automatically combined into far fewer requests. Isn't that a cool feature? A minimal sketch of how bundles are registered follows below. This feature will surely have a great impact on performance. Hope you like it. Stay tuned for more. Till then, happy programming!!
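
    For reference, here is a minimal sketch of what bundle registration looks like in ASP.NET 4.5 using System.Web.Optimization; the bundle names and file paths are illustrative, not taken from the post:

    ```csharp
    using System.Web.Optimization;

    public class BundleConfig
    {
        // Called from Application_Start in Global.asax.
        public static void RegisterBundles(BundleCollection bundles)
        {
            // Combine and minify several scripts into a single request.
            bundles.Add(new ScriptBundle("~/bundles/site").Include(
                "~/Scripts/jquery-{version}.js",
                "~/Scripts/app.js"));

            // Same idea for stylesheets.
            bundles.Add(new StyleBundle("~/Content/css").Include(
                "~/Content/site.css"));

            // Bundling kicks in when <compilation debug="false"> is set in
            // web.config; this flag can force it on regardless.
            // BundleTable.EnableOptimizations = true;
        }
    }
    ```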

    Read the article

  • How to Set Up a MongoDB NoSQL Cluster Using Oracle Solaris Zones

    - by Orgad Kimchi
    This article starts with a brief overview of MongoDB and follows with an example of setting up a three-node MongoDB cluster using Oracle Solaris Zones. The following are benefits of using Oracle Solaris for a MongoDB cluster: • You can add new MongoDB hosts to the cluster in minutes instead of hours using the zone cloning feature. Using Oracle Solaris Zones, you can easily scale out your MongoDB cluster. • In case there is a user error or software error, the Service Management Facility ensures the high availability of each cluster member and ensures that MongoDB replication failover will occur only as a last resort. • You can discover performance issues in minutes rather than days by using DTrace, which provides increased operating system observability. DTrace provides a holistic performance overview of the operating system and allows deep performance analysis through cooperation with the built-in MongoDB tools. • ZFS built-in compression provides optimized disk I/O utilization for better I/O performance. In the example presented in this article, all the MongoDB cluster building blocks are installed using the Oracle Solaris Zones, Service Management Facility, ZFS, and network virtualization technologies. Figure 1 shows the architecture.

    Read the article

  • Problems with Ubuntu and AMD A10-4655M APU

    - by Robert Hanks
    I have a new HP Sleekbook 6z with the AMD A10-4655M APU. I tried installing Ubuntu with Wubi. The first attempt ended up with an 'AMD unsupported hardware' watermark that I wasn't able to remove (it appeared when I tried to update the drivers as Ubuntu suggested). On the second attempted install, Ubuntu installed (I stayed away from the suggested drivers) but the performance was extremely poor, as in Windows Vista poor. I am not sure what the solution is: whether I need to wait for a kernel update in Ubuntu, or whether there are other solutions. I realise this is a new APU on the market. I would love to have Ubuntu 12.04 up and running; Windows 7 does very well with this new processor, so Ubuntu should, well, be lightning fast. The trial on the Sleekbook with the Ubuntu 12.10 Alpha 2 release was a complete failure. I created a bootable USB. Using either the 'Try Ubuntu' or 'Install Ubuntu' option resulted in the usual purple Ubuntu splash screen, followed by nothing, as in a black screen without any hint of life. Interestingly, one can hear the Ubuntu intro sound. In case you are wondering, this same USB was subsequently tried on another computer with an Intel Atom processor and worked flawlessly. Lastly, the second trial on the Sleekbook gave the same results as described in the first paragraph. Perhaps 12.10 Beta will overcome this issue, or the final 12.10 release in October. I don't have the expertise to know what the cause of the behaviour is; the issue could be something else entirely. Sadly, the Windows 7 performance is very good with this processor, similar to and in some instances better than the 2nd-generation Intel i5-based computer I use at my workplace. Whatever the cause of the poor performance with Ubuntu 12.04 and 12.10 Alpha 2, the situation doesn't bode well for Ubuntu. Ubuntu aside, the HP Sleekbook is a good performer for the price. I am certain that once the Ubuntu issue is worked on and solutions arise, Ubuntu performance will probably be better than ever.

    Read the article

  • The remote server returned an error: 227 Entering Passive Mode

    - by hmloo
    Today, while uploading a file to an FTP server, my code threw the error "The remote server returned an error: 227 Entering Passive Mode". After some research, I learned a bit about how FTP works. FTP may run in active or passive mode, which determines how the data connection is established.

    Active mode:
    command connection: client >1024 -> server 21
    data connection: client >1024 <- server 20

    Passive mode:
    command connection: client >1024 -> server 21
    data connection: client >1024 -> server >1024

    In active mode, the client connects from a random unprivileged port (N > 1023) to the FTP server's command port (default port 21). If the client needs to transfer data, the client uses the PORT command to tell the server: "Hi, I opened port XXXX, please connect to me." The server then uses port 20 to initiate the data connection to that client port number. In passive mode, the client connects from a random unprivileged port (N > 1023) to the FTP server's command port (default port 21). If the client needs to transfer data, the server tells the client: "Hi, I opened port XXXX, please connect to me." The client then initiates the data connection to that server port number. In a nutshell, active mode is used to have the server connect to the client, and passive mode is used to have the client connect to the server. So if your FTP server is configured to work in active mode only, or the firewalls between your client and the server are blocking the data port range, then you will get this error message. To fix the issue, just set the System.Net.FtpWebRequest property UsePassive = false, as in the sketch below. Hope this helps! Thanks for reading!
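
    A minimal upload sketch showing where the flag goes; the server address, credentials and file path are placeholders:

    ```csharp
    using System.IO;
    using System.Net;

    class FtpUploadExample
    {
        static void Main()
        {
            // Hypothetical server and credentials.
            var request = (FtpWebRequest)WebRequest.Create(
                "ftp://ftp.example.com/upload/report.txt");
            request.Method = WebRequestMethods.Ftp.UploadFile;
            request.Credentials = new NetworkCredential("user", "password");
            request.UsePassive = false; // use active mode instead of passive

            byte[] payload = File.ReadAllBytes(@"C:\temp\report.txt");
            using (Stream stream = request.GetRequestStream())
            {
                stream.Write(payload, 0, payload.Length);
            }

            using (var response = (FtpWebResponse)request.GetResponse())
            {
                System.Console.WriteLine(response.StatusDescription);
            }
        }
    }
    ```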

    Read the article

  • Recommended storage scheme for home server? (LVM/JBOD/RAID 5...)

    - by j-g-faustus
    Are there any guidelines for which storage scheme(s) make most sense for a multiple-disk home server? I am assuming a separate boot/OS disk (so bootability is not a concern; this is for data storage only) and 4-6 storage disks of 1-2 TB each, for a total storage capacity in the range of 4-12 TB. The file system is ext4; I expect there will be only one big partition spanning all disks. As far as I can tell, the alternatives are:

    Individual disks. Pros: works with any combination of disk sizes; losing a disk loses only the data on that disk; no need for volume management. Cons: data management is clumsy when logical units (like a "movies" folder) are larger than the capacity of any single drive.

    JBOD span. Pros: can merge disks of any size. Cons: losing a disk loses all data on all disks.

    LVM. Pros: can merge disks of any size; relatively simple to add and remove disks. Cons: losing a disk loses all data on all disks.

    RAID 0. Pros: speed. Cons: losing one drive loses all data; disks must be the same size.

    RAID 5. Pros: data survives losing one disk. Cons: gives up one disk's worth of capacity; disks must be the same size.

    RAID 6. Pros: data survives losing two disks. Cons: gives up two disks' worth of capacity; disks must be the same size.

    I'm primarily considering either LVM or a JBOD span, simply because it will let me reuse older, smaller-capacity disks when I upgrade the system. The runner-up is RAID 0 for speed. I'm planning on having full backups to a separate system, so I expect the extra redundancy from RAID levels 5 or 6 won't be important. Is this a fair representation of the alternatives? Are there other considerations or alternatives I have missed? And what would you recommend?

    Read the article

  • I/O errors are reported when I try to install Ubuntu, but the SMART data is good. Is my hard disk dying?

    - by James
    When I try to install Linux, it tells me there is an input/output error on dev sda. I have tried both Ubuntu and Mint on two different computers, so that narrows it down to the HDD. After hours of googling and trying different things, I tried formatting the hard drive as ext4 with GParted, but that also comes up with an error. This makes me think that the HDD is bad. There are, however, a few reasons I think the HDD isn't bad: I can use the HDD in Windows fully; Windows and GParted disk health checks both say it is fine; and its SMART data is all good. So... help?

    Read the article

  • Exalogic Elastic Cloud Software (EECS) version 2.0.1 available

    - by JuergenKress
    We are pleased to announce that as of today (May 14, 2012) the Exalogic Elastic Cloud Software (EECS) version 2.0.1 has been made generally available. This release is the culmination of over two and a half years of engineering effort from an extended team spanning 18 product development organizations on three continents, and is the most powerful, sophisticated and comprehensive Exalogic Elastic Cloud Software release to date. With this new EECS release, Exalogic customers now have an ideal platform not only for high-performance and mission-critical applications, but for standardization and consolidation of virtually all Oracle Fusion Middleware, Fusion Applications, Application Unlimited and Oracle GBU Applications. With the release of EECS 2.0.1, Exalogic is now capable of hosting multiple concurrent tenants, business applications and middleware deployments with fine-grained resource management, enterprise-grade security, unmatched manageability and extreme performance in a fully virtualized environment. The Exalogic Elastic Cloud Software 2.0.1 release brings important new technologies to the Exalogic platform: support for extremely high-performance x86 server virtualization via a highly optimized version of Oracle VM 3.x, and a rich, fully integrated Infrastructure-as-a-Service management system called Exalogic Control, which provides graphical, command-line and Java interfaces that allow Cloud Users, or external systems, to create and manage users, virtual servers, virtual storage and virtual network resources. Webcast Series: Rethink Your Business Application Deployment Strategy Redefining the CRM and E-Commerce Experience with Oracle Exalogic, 7-Jun@10am PT & On-Demand: ‘The Road to a Cloud-Enabled, Infinitely Elastic Application Infrastructure’ (featuring Gartner Analysts). WebLogic Partner Community For regular information become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: ExaLogic Elastic Cloud,ExaLogic,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress,ExaLogic 2.0.1

    Read the article

  • The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on di

    - by simonsabin
    Are you trying to build a SQL Server database project and getting "The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk."? We had this recently when trying to build one SSDT solution but not when building another. Checking the build agent, the error was correct: that file didn't exist...(read more)

    Read the article

  • Preparing for Interview: I feel like I wanna start coding all data structures again... Is it the right way to start?

    - by howtechstuffworks
    I started preparing for my interview; I will be graduating this May. Right now I am doing an internship at a product-based company, so I have done my fair share of preparation. The thing is, whenever I try to do something, I always think that I should start everything from page 1, which is sometimes not at all helpful. For example, reading a programming book from page 1 is not going to be helpful, as I have never completed one and have always stopped halfway through or rushed through towards deadlines. I am facing a similar situation now. I feel like coding all the data structures from scratch again (I attempted and finished at least 6-7 last fall, preparing for interviews), instead of attempting something new. I am not quite sure what this is, but this is the way I have been... LOL

    Read the article

  • Architecture for dashboard showing aggregated stats [on hold]

    - by soulnafein
    I'd like to know the common architectural patterns for the following problem. Web application A has information on sales, users, responsiveness scores, etc. Some of this information is computationally intensive and/or has complex business logic (e.g. the responsiveness score). I'm building a separate application (B) for internal admin tasks that modifies data in web application A and reports on data from web application A. For writing, I'm planning to use a RESTful API (e.g. create a new entity, update an entity, etc.). In application B I'd like to show some graphs and other aggregate data for the previous 12 months. I'm planning to store the aggregate data for each month in Redis (see the sketch after this list). Some data should update more often, e.g. every 10 minutes. I can think of 3 ways of doing this: 1. A scheduled task in app B that connects to an API of app A that provides some aggregated data; app B then stores it in Redis and uses that to render pages. Cons: it performs complex calculations within a web request and requires lots of work (e.g. an API server and client, storage, etc.). Pros: the business logic still lives in app A. 2. A scheduled task in app A that aggregates data in a non-web process and stores it directly in Redis, to be accessed by app B. 3. A scheduled task in app A that aggregates data in a non-web process and uses an API in app B to save it. I'd like to know if there is a well-known architectural solution to this type of problem, and if not, what the other pros/cons are for the solutions I've suggested.
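
    For illustration only, a minimal sketch of option 2's storage step, assuming a .NET process and the StackExchange.Redis client; the key names and the aggregate shape are made up for the example:

    ```csharp
    using System;
    using StackExchange.Redis;

    class MonthlyAggregateWriter
    {
        static void Main()
        {
            // Connect to a local Redis instance (address is a placeholder).
            var redis = ConnectionMultiplexer.Connect("localhost");
            IDatabase db = redis.GetDatabase();

            // Store one month's pre-computed aggregates under a date-keyed hash.
            // A scheduled job would refresh the current month every N minutes.
            db.HashSet("stats:2013-01", new[]
            {
                new HashEntry("sales_total", 125000),
                new HashEntry("new_users", 342),
                new HashEntry("responsiveness_score", 87.5)
            });

            // App B reads the last 12 monthly hashes to draw its graphs.
            Console.WriteLine(db.HashGet("stats:2013-01", "sales_total"));
        }
    }
    ```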

    Read the article

  • Selecting objects on a 2D map

    - by Dave
    I have an object-selection function that checks the mouse click and gets the relevant object. However, there is a rare situation where, if one object is partially behind another, both objects are in the given area, so I'm wondering how I can make the game know which one was selected, as currently my method does not know. This is the function that works it out:

    ```javascript
    function getobj(e) {
        mx = e.pageX - curleft; // mouse click x
        my = e.pageY - curtop;  // mouse click y

        function searchSprites(sprites, x, y) {
            var i = 0, data = null;
            for (i = 0; i < sprites.length; ++i) {
                data = sprites[i].data;
                // Bounding-box test first...
                if (x > data[0] && y > data[1] && x < data[2] && y < data[3]) {
                    // ...then a per-pixel alpha test; returns the first hit.
                    var imageData = ctx2.getImageData(x, y, 1, 1);
                    if (imageData.data[3] !== 0) {
                        return [sprites[i].id];
                    }
                }
            }
        }

        res = searchSprites(spritea, mx, my);
        bid = res[0];
        if (bid === '1') {
            alert('You selected the skyscraper in front!');
        } else if (bid === '3') {
            alert('You selected the skyscraper behind!');
        }
    }
    ```

    Image of the map: http://i.imgur.com/qcKij.jpg

    It keeps telling me I clicked the skyscraper behind, which is not necessarily what the user is trying to do. How can I improve the accuracy of this?

    Read the article

  • Series On Embedded Development (Part 1)

    - by user12612705
    This is the first in a series of entries on developing applications for the embedded environment. Most of this information is relevant to any type of embedded development (and even to desktop and server development), not just Java. This information is based on a talk Hinkmond Wong and I gave at JavaOne 2012 entitled Reducing Dynamic Memory in Java Embedded Applications. One thing to remember when developing embedded applications is that memory matters. Yes, memory matters in desktop and server environments as well, but there's just plain less of it in embedded devices. So I'm going to be talking about saving this precious resource, as well as another precious resource, CPU cycles... and a bit about power too. CPU matters too, and again, in embedded devices there's just plain less of it. What you'll find, no surprise, is that there's a trade-off between performance and memory: to get better performance, you need to use more memory, and to save more memory, you need to use more CPU cycles. I'll be discussing three memory-reduction categories: Optionality, both build-time and runtime, which is about providing options so you can get rid of the stuff you don't need and include the stuff you do need; Tunability, which is about providing options so you can tune your application by trading performance for size, and vice versa; and Efficiency, which is about balancing size savings with performance.

    Read the article
