Search Results

Search found 62870 results on 2515 pages for 'usage data'.

Page 57 of 2515

  • How do I free up more space in /boot?

    - by user6722
    My /boot partition is nearly full and I get a warning every time I reboot my system. I already deleted old kernel packages (linux-headers...); actually, I did that to install a newer kernel version that came with the automatic updates. After installing that new version, the partition is nearly full again. So what else can I delete? Are there other files associated with the old kernel images? Here is a list of the files on my /boot partition:

        :~$ ls /boot/
        abi-2.6.31-21-generic         lost+found
        abi-2.6.32-25-generic         memtest86+.bin
        abi-2.6.38-10-generic         memtest86+_multiboot.bin
        abi-2.6.38-11-generic         System.map-2.6.31-21-generic
        abi-2.6.38-12-generic         System.map-2.6.32-25-generic
        abi-2.6.38-8-generic          System.map-2.6.38-10-generic
        abi-3.0.0-12-generic          System.map-2.6.38-11-generic
        abi-3.0.0-13-generic          System.map-2.6.38-12-generic
        abi-3.0.0-14-generic          System.map-2.6.38-8-generic
        boot                          System.map-3.0.0-12-generic
        config-2.6.31-21-generic      System.map-3.0.0-13-generic
        config-2.6.32-25-generic      System.map-3.0.0-14-generic
        config-2.6.38-10-generic      vmcoreinfo-2.6.31-21-generic
        config-2.6.38-11-generic      vmcoreinfo-2.6.32-25-generic
        config-2.6.38-12-generic      vmcoreinfo-2.6.38-10-generic
        config-2.6.38-8-generic       vmcoreinfo-2.6.38-11-generic
        config-3.0.0-12-generic       vmcoreinfo-2.6.38-12-generic
        config-3.0.0-13-generic       vmcoreinfo-2.6.38-8-generic
        config-3.0.0-14-generic       vmcoreinfo-3.0.0-12-generic
        extlinux                      vmcoreinfo-3.0.0-13-generic
        grub                          vmcoreinfo-3.0.0-14-generic
        initrd.img-2.6.31-21-generic  vmlinuz-2.6.31-21-generic
        initrd.img-2.6.32-25-generic  vmlinuz-2.6.32-25-generic
        initrd.img-2.6.38-10-generic  vmlinuz-2.6.38-10-generic
        initrd.img-2.6.38-11-generic  vmlinuz-2.6.38-11-generic
        initrd.img-2.6.38-12-generic  vmlinuz-2.6.38-12-generic
        initrd.img-2.6.38-8-generic   vmlinuz-2.6.38-8-generic
        initrd.img-3.0.0-12-generic   vmlinuz-3.0.0-12-generic
        initrd.img-3.0.0-13-generic   vmlinuz-3.0.0-13-generic
        initrd.img-3.0.0-14-generic   vmlinuz-3.0.0-14-generic

    Currently, I'm using the 3.0.0-14-generic kernel.
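
    For reference, a minimal sketch of how old kernel images can be purged with apt (this assumes the kernels above were installed as Ubuntu packages; the 2.6.31-21 series below is just one example taken from the listing, and the running 3.0.0-14 kernel must be left alone):

        # show which kernel image packages are currently installed
        dpkg -l 'linux-image-*' | grep '^ii'

        # confirm the kernel that is running right now (do not remove this one)
        uname -r

        # purge one old kernel series; this drops its vmlinuz/initrd/config files from /boot
        sudo apt-get purge linux-image-2.6.31-21-generic

        # regenerate the boot menu after the removals
        sudo update-grub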

    Read the article

  • PHP usage outside the web?

    - by Anto
    As you are probably aware, PHP is usable not only for web programming but also for desktop programming. It even has things such as GTK bindings. Do you have any examples of places where PHP is actually used outside web programming for anything more than very trivial programs? Do you know of any desktop program that uses PHP to some extent (e.g. the way Python can be embedded in a C program)? Note: I don't program in PHP myself; I'm just curious.

    Read the article

  • Optimizing Memory Usage in a .NET Application with ANTS Memory Profiler

    Most people have encountered an OutOfMemory problem at some point or other, and these people know that tracking down the source of the problem is often a time-consuming and frustrating task. Florian Standhartinger gives us a walkthrough of how he used the ANTS Memory Profiler to help make an otherwise painful task that little bit less troublesome.

    Read the article

  • Regarding Microsoft MVC framework and usage [closed]

    - by Thomas
    It would help if someone could tell me what type of web application should be developed using the Microsoft MVC framework. I am familiar with Web Forms but not with the MS MVC framework, and I feel that any type of web application can be developed with Web Forms. I have searched Google a lot for the specific reasons to use the MS MVC framework. I am keen to know when I should develop web apps using the MS MVC framework and when I should use Web Forms. I would be happy if someone could discuss this issue in detail. Thanks.

    Read the article

  • Database Schema Usage

    - by CrazyHorse
    I have a question regarding the appropriate use of SQL Server database schemas and was hoping that some database gurus might be able to offer some guidance around best practice.

    Just to give a bit of background, my team has recently shrunk to 2 people and we have just been merged with another 6-person team. My team had set up a SQL Server environment running off a desktop, backing up to another desktop (and nightly to the network), whilst the new team has a formal SQL Server environment, running on a dedicated server, with backups and maintenance all handled by a dedicated team. So far it's good news for my team.

    Now to the query. My team designed all our tables to belong to a 3-letter schema name (e.g. User = USR, General = GEN, Account = ACC) which broadly speaking relates to specific applications, although there is a lot of overlap. My new team has come from an Access background and has implemented their tables within dbo with a 3-letter prefix followed by "_tbl", so the examples above would be dbo.USR_tblTableName, dbo.GEN_tblTableName and dbo.ACC_tblTableName. Further to this, neither my old team nor my new team has gone live with their SQL Servers yet (we're both, coincidentally, migrating away from Access environments), and the new team have said they're willing to consider adopting our approach if we can explain how this would be beneficial.

    We are not anticipating handling table updates at schema level, as we will be using application-level logins. Also, with regards to the unwieldiness of the 7-character prefix, I'm not overly concerned myself, as we're using LINQ almost exclusively, so the tables can simply be renamed in the DBML (although I know that presents some challenges when we update the DBML).

    So, given that both teams need to be aligned with one another, can anyone offer any convincing arguments either way?

    Read the article

  • Why is the hard drive still full after deleting some files?

    - by julio
    I have a server running Ubuntu Server 12.xx. Today some services stopped and I found some messages about a full disk, so I ran df -h:

        Filesystem               Size  Used Avail Use% Mounted on
        /dev/mapper/ubuntu-root  455G  434G     0 100% /
        udev                     1,7G  4,0K  1,7G   1% /dev
        tmpfs                    689M  4,2M  685M   1% /run
        none                     5,0M     0  5,0M   0% /run/lock
        none                     1,7G     0  1,7G   0% /run/shm
        /dev/sda1                228M   51M  166M  24% /boot
        overflow                 1,0M     0  1,0M   0% /tmp

    I tried to delete some files remotely from a Windows computer by right-clicking and choosing "delete", but the hard drive remained full. Is there a Trash folder in Ubuntu Server? What could be happening?
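
    A sketch of checks that can help narrow this down, assuming GNU find/du/sort are available (the trash-folder names below are common conventions, not guaranteed to exist on this server): files deleted over a network share sometimes land in a hidden trash directory instead of being freed, and du can show which directories actually hold the data.

        # look for hidden trash folders left by network/desktop file managers
        sudo find / -xdev -maxdepth 3 -type d \( -name '.Trash*' -o -name '.recycle' \) 2>/dev/null

        # show the biggest top-level directories on the root filesystem only
        sudo du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 15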

    Read the article

  • Sort Data in Windows Phone using Collection View Source

    - by psheriff
    When you write a Windows Phone application you will most likely consume data from a web service somewhere. If that service returns data to you in a sort order that you do not want, you have an easy alternative to sort the data without writing any C# or VB code. You use the built-in CollectionViewSource object in XAML to perform the sorting for you. This assumes that you can get the data into a collection that implements the IEnumerable or IList interfaces.

    For this example, I will be using a simple Product class with two properties, and a list of Product objects using the Generic List class. Try this out by creating a Product class as shown in the following code:

        public class Product
        {
          public Product(int id, string name)
          {
            ProductId = id;
            ProductName = name;
          }

          public int ProductId { get; set; }
          public string ProductName { get; set; }
        }

    Create a collection class that initializes a property called DataCollection with some sample data as shown in the code below:

        public class Products : List<Product>
        {
          public Products()
          {
            InitCollection();
          }

          public List<Product> DataCollection { get; set; }

          List<Product> InitCollection()
          {
            DataCollection = new List<Product>();
            DataCollection.Add(new Product(3, "PDSA .NET Productivity Framework"));
            DataCollection.Add(new Product(1, "Haystack Code Generator for .NET"));
            DataCollection.Add(new Product(2, "Fundamentals of .NET eBook"));
            return DataCollection;
          }
        }

    Notice that the data added to the collection is not in any particular order. Create a Windows Phone page and add two XML namespaces to the Page:

        xmlns:scm="clr-namespace:System.ComponentModel;assembly=System.Windows"
        xmlns:local="clr-namespace:WPSortData"

    The 'local' namespace is an alias to the name of the project that you created (in this case WPSortData). The 'scm' namespace references the System.Windows.dll and is needed for the SortDescription class that you will use for sorting the data. Create a phone:PhoneApplicationPage.Resources section in your Windows Phone page that looks like the following:

        <phone:PhoneApplicationPage.Resources>
          <local:Products x:Key="products" />
          <CollectionViewSource x:Key="prodCollection"
              Source="{Binding Source={StaticResource products},
                               Path=DataCollection}">
            <CollectionViewSource.SortDescriptions>
              <scm:SortDescription PropertyName="ProductName"
                                   Direction="Ascending" />
            </CollectionViewSource.SortDescriptions>
          </CollectionViewSource>
        </phone:PhoneApplicationPage.Resources>

    The first line of code in the resources section creates an instance of your Products class. The constructor of the Products class calls the InitCollection method which creates three Product objects and adds them to the DataCollection property of the Products class. Once the Products object is instantiated you now add a CollectionViewSource object in XAML using the Products object as the source of the data to this collection. A CollectionViewSource has a SortDescriptions collection that allows you to specify a set of SortDescription objects. Each object can set a PropertyName and a Direction property. As you see in the above code you set the PropertyName equal to the ProductName property of the Product object and tell it to sort in an Ascending direction.

    All you have to do now is to create a ListBox control and set its ItemsSource property to the CollectionViewSource object. The ListBox displays the data in sorted order by ProductName and you did not have to write any LINQ queries or write other code to sort the data!

        <ListBox
           ItemsSource="{Binding Source={StaticResource prodCollection}}"
           DisplayMemberPath="ProductName" />

    Summary

    In this blog post you learned that you can sort any data without having to change the source code of where the data comes from. Simply feed the data into a CollectionViewSource in XAML and set some sort descriptions in XAML and the rest is done for you! This comes in very handy when you are consuming data from a source where the data is given to you and you do not have control over the sorting.

    NOTE: You can download this article and many samples like the one shown in this blog entry at my website: http://www.pdsa.com/downloads. Select “Tips and Tricks”, then “Sort Data in Windows Phone using Collection View Source” from the drop-down list.

    Good Luck with your Coding,
    Paul Sheriff

    ** SPECIAL OFFER FOR MY BLOG READERS **
    We frequently offer a FREE gift for readers of my blog. Visit http://www.pdsa.com/Event/Blog for your FREE gift!

    Read the article

  • Connect to QuickBooks from PowerBuilder using RSSBus ADO.NET Data Provider

    - by dataintegration
    The RSSBus ADO.NET providers are easy-to-use, standards-based controls that can be used from any platform or development technology that supports Microsoft .NET, including Sybase PowerBuilder. In this article we show how to use the RSSBus ADO.NET Provider for QuickBooks in PowerBuilder. A similar approach can be used from PowerBuilder with other RSSBus ADO.NET Data Providers to access data from Salesforce, SharePoint, Dynamics CRM, Google, OData, etc. We will create a basic PowerBuilder application that performs CRUD operations using the RSSBus ADO.NET Provider for QuickBooks.

    Step 1: Open PowerBuilder and create a new WPF Window Application solution.

    Step 2: Add all the visual controls needed for the connection properties.

    Step 3: Add the DataGrid control from the .NET controls.

    Step 4: Configure the columns of the DataGrid control as shown below. The column bindings will depend on the table.

        <DataGrid AutoGenerateColumns="False" Margin="13,249,12,14" Name="datagrid1"
                  TabIndex="70" ItemsSource="{Binding}">
          <DataGrid.Columns>
            <DataGridTextColumn x:Name="idColumn" Binding="{Binding Path=ID}"
                                Header="ID" Width="SizeToHeader" />
            <DataGridTextColumn x:Name="nameColumn" Binding="{Binding Path=Name}"
                                Header="Name" Width="SizeToHeader" />
            ...
          </DataGrid.Columns>
        </DataGrid>

    Step 5: Add a reference to the RSSBus ADO.NET Provider for QuickBooks assembly.

    Step 6 (optional): Set the QBXML Version to 6. Some of the tables in QuickBooks require a later version of QuickBooks to support updates and deletes. Please check the help for details.

    Connect the DataGrid: Once the visual elements have been configured, developers can use standard ADO.NET objects like Connection, Command, and DataAdapter to populate a DataTable with the results of a SQL query:

        System.Data.RSSBus.QuickBooks.QuickBooksConnection conn
        conn = create System.Data.RSSBus.QuickBooks.QuickBooksConnection(connectionString)

        System.Data.RSSBus.QuickBooks.QuickBooksCommand comm
        comm = create System.Data.RSSBus.QuickBooks.QuickBooksCommand(command, conn)

        System.Data.DataTable table
        table = create System.Data.DataTable

        System.Data.RSSBus.QuickBooks.QuickBooksDataAdapter dataAdapter
        dataAdapter = create System.Data.RSSBus.QuickBooks.QuickBooksDataAdapter(comm)
        dataAdapter.Fill(table)

        datagrid1.ItemsSource=table.DefaultView

    The code above can be used to bind data from any query (set this in command) to the DataGrid. The DataGrid should have the same columns as those returned from the SELECT statement.

    PowerBuilder Sample Project: The included sample project includes the steps outlined in this article. You will also need the QuickBooks ADO.NET Data Provider to make the connection. You can download a free trial here.

    Read the article

  • Is Tracking Software Usage Illegal?

    - by Graviton
    Let's say I am writing a desktop application and I am interested in knowing whether our software really gets used or not. Is it all right to insert code that tracks whether our software is used, for how long, and so on? Note that no personally identifiable information will be collected; all I am interested in knowing is how frequently and for how long the software is used. The information will be sent to our server for diagnostics. What do you think?

    Read the article

  • SQL SERVER – QUOTED_IDENTIFIER ON/OFF Explanation and Example – Question on Real World Usage

    - by Pinal Dave
    This is a follow-up to SQL SERVER – QUOTED_IDENTIFIER ON/OFF and ANSI_NULL ON/OFF Explanation. I wrote that blog six years ago and had planned to write a follow-up post. Today, when I was going over my to-do list, I was surprised to find an item there that was six years old and that I had never gotten to. In the earlier blog post I wrote about the explanation of Quoted Identifier and ANSI Null. In this blog post we will see a quick example of Quoted Identifier. However, before we continue, let us refresh what QUOTED_IDENTIFIER does.

    QUOTED_IDENTIFIER ON/OFF

    This option specifies the setting for the use of double quotes. When this is ON, a double quotation mark is treated as part of a SQL Server identifier (object name). This can be useful in situations in which identifiers are also SQL Server reserved words. In simple words, when we have QUOTED_IDENTIFIER ON, anything which is wrapped in double quotes becomes an object. E.g.

        -- The following will work
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE DATABASE "Test1"
        GO

        -- The following will throw an error about Incorrect syntax near 'Test2'.
        SET QUOTED_IDENTIFIER OFF
        GO
        CREATE DATABASE "Test2"
        GO

    This feature is particularly helpful when we are working with reserved keywords in SQL Server. For example, if you have to create a database with the name VARCHAR or INT or DATABASE, you may want to put double quotes around your database name and turn on quoted identifiers to create a database with such a name. Personally, I do not think anybody will ever create a database with a reserved keyword intentionally, as it will just lead to confusion. Here is another example to give you further clarity about how the QUOTED_IDENTIFIER setting works with a SELECT statement:

        -- The following will throw an error about Invalid column name 'Column'.
        SET QUOTED_IDENTIFIER ON
        GO
        SELECT "Column"
        GO

        -- The following will work
        SET QUOTED_IDENTIFIER OFF
        GO
        SELECT "Column"
        GO

    Personally, I always use the following method to create a database, as it works irrespective of the quoted identifier's status. It always creates objects with the name I want.

        CREATE DATABASE [Test3]

    I believe the quoted identifier ON/OFF setting is useful in the real world when we have a script generated from another database where this setting was ON and we now have to execute the same script in our own environment.

    Question to you – I personally have never used this feature, as I mentioned earlier. I believe this feature is there to support scripts which were generated in another SQL database, or to generate scripts for another database. Do you have a real-world scenario where we need to turn Quoted Identifiers on or off?

    Click to Download Scripts

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Where did my free space go?

    - by Ari B. Friedman
    I have a storage drive (2TB) and an OS drive (90GB SSD). I've run out of space on the OS drive:

        /$ df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sdb1        72G   72G     0 100% /
        udev            5.9G   12K  5.9G   1% /dev
        tmpfs           2.4G  1.2M  2.4G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            5.9G  428K  5.9G   1% /run/shm
        /dev/sda1       1.9T  639G  1.2T  37% /media/StorageDrive

    So be it. But when I attempt to figure out where the space has gone, I cannot find anything remotely approaching the capacity of the drive:

        /$ sudo du -h -d 1
        du: cannot access `./media/StorageDrive/home/ari/.gvfs': Permission denied
        675G    ./media
        2.3G    ./var
        0       ./proc
        7.0M    ./tmp
        27M     ./boot
        4.0K    ./lib64
        12K     ./dev
        44M     ./home
        16K     ./lost+found
        8.0M    ./sbin
        223M    ./lib
        4.0K    ./selinux
        1.4M    ./run
        140K    ./root
        8.8M    ./bin
        4.0K    ./mnt
        38M     ./etc
        8.0K    ./srv
        4.8G    ./usr
        65M     ./opt
        0       ./sys
        682G    .

    Note the difference between the total (682G) and the mounted drives in /media (675G) is only about 9G. How are 72G being used? Where is this dark matter hiding?
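
    A sketch of one check that often explains this kind of du/df gap (this is an assumption to verify, not a definitive answer): data may have been written into /media/StorageDrive at some point while the 2TB drive was not mounted, so it now sits hidden underneath the mount point on the SSD. A non-recursive bind mount exposes what is on the root filesystem below the mount:

        # bind-mount / somewhere else; submounts such as /media/StorageDrive are not carried over
        sudo mkdir -p /mnt/rootfs
        sudo mount --bind / /mnt/rootfs

        # this now measures only what lives on the SSD under /media
        sudo du -sh /mnt/rootfs/media

        sudo umount /mnt/rootfs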

    Read the article

  • /tmp shows 690 MB full, actual size 72 KB. Why?

    - by Ankit
    Why is the /tmp directory on my system showing 690 MB used, whereas du -sh /tmp shows only 72K?

        drwxrwxrwt  2 lightdm lightdm  4096 Aug 29 21:49 at-spi2
        drwx------  2 ankit   ankit    4096 Aug 29 21:50 keyring-0JTfoY
        drwx------  2 ankit   ankit    4096 Aug 29 21:44 keyring-rChLLL
        drwx------  2 root    root    16384 Jul 22 02:10 lost+found
        drwx------  2 ankit   ankit    4096 Jan  1  1970 orbit-ankit
        drwx------  2 lightdm lightdm  4096 Aug 29 21:50 pulse-2L9K88eMlGn7
        drwx------  2 root    root     4096 Aug 29 21:44 pulse-PKdhtXMmr18n
        drwx------  2 ankit   ankit    4096 Aug 29 21:50 pulse-zR1TZUAZfmQW
        drwx------  2 ankit   ankit    4096 Aug 29 21:44 ssh-dlslOXOq2203
        drwx------  2 ankit   ankit    4096 Aug 29 21:50 ssh-MrQQVRyy3316
        -rw-------  1 ankit   ankit       0 Aug 29 21:45 tmp0qnNG4
        -rw-------  1 ankit   ankit       0 Aug 29 21:50 tmpVvSMt6
        -rw-------  1 ankit   ankit       0 Aug 29 21:49 tmpy9Gadz
        -rw-rw-r--  1 lightdm lightdm     0 Aug 29 21:44 unity_support_test.0

        ankit@duster:/tmp$ df -h
        df: `/home/ankit/.gvfs': Transport endpoint is not connected
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda1        79G   11G   65G  14% /
        udev            2.9G  4.0K  2.9G   1% /dev
        tmpfs           1.2G  868K  1.2G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            2.9G  220K  2.9G   1% /run/shm
        /dev/sda7        38G  690M   35G   2% /tmp
        /dev/sda5        93G   26G   63G  30% /home
        /dev/sda6        93G  1.6G   87G   2% /boot
        /dev/sda3       154G   69G   78G  48% /home/mount_150

        ankit@duster:/tmp$ sudo du -sh /tmp/
        72K
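
    One plausible explanation to check (an assumption, not a confirmed cause): a process may still hold a deleted file open on the /tmp filesystem, so its blocks stay allocated even though du no longer sees a name for them. A quick sketch of how to check, assuming lsof is installed:

        # list deleted files that some process still holds open, then look for ones under /tmp
        sudo lsof +L1 | grep /tmp

        # if something shows up, restarting the owning process releases the space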

    Read the article

  • Linux kernel regression on power usage

    - by dago
    Webupd8 reported this power management fix for the 2.6.38 Linux kernel regression: add "pcie_aspm=force" to the kernel boot line in GRUB.

    My question: how does this suggested fix differ from this hint from powertop? "Suggestion: Enable Device Power Management" by pressing the P key, which executes the following action:

        find /sys/devices/pci* -path "*power/control" -exec bash -c "echo auto > '{}'" \;
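
    For context, a minimal sketch of how the Webupd8 fix is usually applied on Ubuntu (assuming GRUB 2 and that the stock "quiet splash" default line has not already been customized):

        # /etc/default/grub (edit by hand; the stock Ubuntu line is assumed here)
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pcie_aspm=force"

        # then regenerate grub.cfg and reboot for the new boot line to take effect
        sudo update-grub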

    Read the article

  • How long for data highlighter mark up to appear in structured data tool?

    - by Max
    I used the Data Highlighter in Webmaster Tools over 3 weeks ago to mark up some local business data, but there is still no structured data being detected in Webmaster Tools. Does anybody have any experience of approximately how long it takes for Google Webmaster Tools to start reporting structured data that has been marked up with their Data Highlighter? I'm asking specifically about reporting in the Webmaster Tools Structured Data section, as opposed to actually appearing in the SERPs.

    Read the article

  • EBS Seed Data Comparison Reports Now Available

    - by Steven Chan (Oracle Development)
    Earlier this year we released a reporting tool that reports on the differences in E-Business Suite database objects between one release and another. That's a very useful reference, but EBS defaults are delivered as seed data within the database objects themselves. What about the differences in this seed data between one release and another?

    I'm pleased to announce the availability of a new tool that provides comparison reports of E-Business Suite seed data between EBS 11.5.10.2, 12.0.4, 12.0.6, 12.1.1, and 12.1.3. This new tool complements the information in the data model comparison tool. You can download the new seed data comparison tool here: EBS ATG Seed Data Comparison Report (Note 1327399.1)

    The EBS ATG Seed Data Comparison Report reports on the changes between different EBS releases, based upon the seed data changes delivered by the product data loader files (.ldt extension) and the EBS ATG loader control (.lct extension) files. You can use this new tool to report on the differences in the following types of seed data:

      Concurrent Program definitions
      Descriptive Flexfield entity definitions
      Application Object Library profile option definitions
      Application Object Library (AOL) key flexfield, function, lookups, value set definitions
      Application Object Library (AOL) menu and responsibility definitions
      Application Object Library messages
      Application Object Library request set definitions
      Application Object Library printer styles definitions
      Report Manager / WebADI component and integrator entity definitions
      Business Intelligence Publisher (BI Publisher) entity definitions
      BIS Request Set Generator entity definitions
      ... and more

    Your feedback is welcome. This new tool was produced by our hard-working EBS Release Management team, and they're actively seeking your feedback. Please feel free to share your experiences with it by posting a comment here. You can also request enhancements to this tool via the distribution list address included in Note 1327399.1.

    Related Articles:
      Oracle E-Business Suite Release 12.1.3 Now Available
      New Whitepaper: Upgrading EBS 11i Forms + OA Framework Personalizations to EBS 12
      EBS 12.0 Minimum Requirements for Extended Support Finalized
      Five Key Resources for Upgrading to E-Business Suite Release 12
      E-Business Suite Release 12.1.1 Consolidated Upgrade Patch 1 Now Available
      New Whitepaper: Planning Your E-Business Suite Upgrade from Release 11i to 12.1

    Read the article

  • How to empty swap if there is free RAM?

    - by jfoucher
    When I open a RAM-intensive app (VirtualBox set at 2 GB RAM), some swap space is generally used, depending on what else I have open at the time. However, when I quit that application, the 2 GB of RAM are freed, but the same amount of swap stays in use. How can I tell Ubuntu to stop using that swap and revert to using RAM? Thank you.

    Edit: Right now, about 2 hours after having closed VirtualBox, I have 1.6 GB of free RAM and still 770 MB in swap.
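
    A minimal sketch of the usual way to push everything back out of swap, assuming there is enough free RAM to hold the swapped-out pages (otherwise the swapoff step will fail or thrash):

        # check current usage, then disable and re-enable all swap areas
        free -m
        sudo swapoff -a
        sudo swapon -a
        free -m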

    Read the article

  • PASS Business Intelligence Virtual Chapter Upcoming Sessions (November 2013)

    - by Sergio Govoni
    Let me point out the upcoming live events, dedicated to Business Intelligence with SQL Server, that the PASS Business Intelligence Virtual Chapter has scheduled for November 2013.

    The "Accidental Business Intelligence Project Manager"
    Date: Thursday 7th November - 8:00 PM GMT / 3:00 PM EST / Noon PST
    Speaker: Jen Stirrup
    URL: https://attendee.gotowebinar.com/register/5018337449405969666

    You've watched The Apprentice with Donald Trump and Lord Alan Sugar. You know that the Project Manager is usually the one who gets fired. You've heard that Business Intelligence projects are prone to failure. You know that a quick Bing search for "why do Business Intelligence projects fail?" produces a search result of 25 million hits! Despite all this… you're now a Business Intelligence Project Manager – now what do you do? In this session, Jen will provide a "sparks from the anvil" series of steps and working practices in Business Intelligence project management. What about waterfall vs agile? What is a Gantt chart anyway? Is Microsoft Project your friend or a problematic aspect of being a BI PM? Jen will give you some ideas and insights that will help you set your BI project right: assess priorities, avoid conflict, empower the BI team and generally deliver the Business Intelligence project successfully!

    Dimensional Modelling Design Patterns: Beyond Basics
    Date: Tuesday 12th November - Noon AEDT / 1:00 AM GMT / Monday 11th November 5:00 PM PST
    Speakers: Jason Horner, Josh Fennessy and friends
    URL: https://attendee.gotowebinar.com/register/852881628115426561

    This session will provide a deeper dive into the art of dimensional modeling. We will look at the different types of fact tables and dimension tables, and how and when to use them. We will also cover some approaches to creating rich hierarchies that make reporting a snap. This session promises to be very interactive and engaging; bring your toughest dimensional modeling quandaries.

    Data Vault Data Warehouse Architecture
    Date: Tuesday 19th November - 4:00 PM PST / 7:00 PM EST / Wednesday 20th November 11:00 PM AEDT
    Speakers: Jeff Renz and Leslie Weed
    URL: https://attendee.gotowebinar.com/register/1571569707028142849

    Data vault is a compelling architecture for an enterprise data warehouse using SQL Server 2012. A well-designed data vault data warehouse facilitates fast, efficient and maintainable data integration across business systems. In this session Leslie and I will review the basics of enterprise data warehouse design, introduce you to the data vault architecture, and discuss how you can leverage new features of SQL Server 2012 to help make your data warehouse solution provide maximum value to your users.

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data

    Have you been hearing about "big data", "map reduce" and other large-scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a "test" Hadoop environment on your machine? More recently, I have read about some of Microsoft's work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced.

    Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell, might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors, of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.

    Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

        C:\> winrm quickconfig
        WinRM is not set up to receive requests on this machine.
        The following changes must be made:
        Set the WinRM service type to auto start.
        Start the WinRM service.
        Make these changes [y/n]? y

    Alternatively, you could take the approach described in the "Remotely enable PSRemoting" post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.

    Invoke-MapRedux

    Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

    Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with the concepts of big data and the MapReduce paradigm. Now, how did we get there?

    I have published the initial version of my PsMapRedux PowerShell Module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.

    Map

    Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows:
    A simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function.

        $aMap =
        {
            Param
            (
                [PsObject] $dataset
            )

            # Indicate the job is running on the remote node.
            Write-Host ($env:computername + "::Map");

            # The hashtable to return
            $list = @{};

            # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject...
            # ... The $dataset has a single 'Data' property which contains an array of data rows
            #     which is a subset of the originally submitted data set.

            # Return the hashtable (Key, PSObject)
            Write-Output $list;
        }

    Reduce

    Likewise, with the Reduce function a simple prototype must be followed which takes a $key and a result $dataset from the MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.

        $aReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )

            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)

            # The hashtable to return
            $redux = @{};

            # Return
            Write-Output $redux;
        }

    All Together Now

    When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started:

        # Import the MapRedux and SQL Server providers
        Import-Module "MapRedux"
        Import-Module "sqlps" -DisableNameChecking

        # Query the database for a dataset
        Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
        $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
        Write-Host "Query: $query"
        $dataset = Invoke-SqlCmd -query $query

        # Build the Map function
        $MyMap =
        {
            Param
            (
                [PsObject] $dataset
            )

            Write-Host ($env:computername + "::Map");

            $list = @{};
            foreach($row in $dataset.Data)
            {
                # Write-Host ("Key: " + $row.MyKey.ToString());

                if($list.ContainsKey($row.MyKey) -eq $true)
                {
                    $s = $list.Item($row.MyKey);
                    $s.Sum += $row.Value1;
                    $s.Count++;
                }
                else
                {
                    $s = New-Object PSObject;
                    $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
                    $s | Add-Member -type NoteProperty -Name Sum -Value $row.Value1;
                    $list.Add($row.MyKey, $s);
                }
            }

            Write-Output $list;
        }

        $MyReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )

            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)

            $redux = @{};
            $count = 0;
            foreach($s in $dataset.Data)
            {
                $sum += $s.Sum;
                $count += 1;
            }

            # Reduce
            $redux.Add($s.MyKey, $sum / $count);

            # Return
            Write-Output $redux;
        }

        # Create the item data
        $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce

        # Array of processing nodes...
        $MyNodes = ("node1", "node2", "node3", "node4", "localhost")

        # Run the Map Reduce routine...
        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

        # Show the results
        Set-Location C:\
        $MyMrResults | Out-GridView

    Conclusion

    I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done and PowerShell could prove to be the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.
    Follow me on Twitter to stay up to date on the continuing progress of my PowerShell MapRedux module, and thanks for reading! Daniel

    Read the article

  • Chrome memory and CPU footprint

    - by nmizar
    I've searched the forums for an answer but couldn't find quite the answer I was looking for [1], so I thought it could be of interest to more people around here. I carry out a big part of my job in the browser (or for the browser, if you want to put it that way). I tend to use Chrome, because it natively has many of the newest features that I need (DevTools stuff, mainly but not only). By the way, I'm usually running the latest available Chrome version/build on a desktop Vaio with 4 GB RAM and a dual-core CPU, with Ubuntu 12.04 as the distro and GNOME as the window manager.

    So, I was curious about a) why Chrome spawns so many threads even when opening only three or four tabs, and b) whether there is any way to allocate more memory to Chrome to prevent its performance from degrading. Thanks in advance, Nacho

    PS [1] I found threads about Chrome freezing or running out of memory, but not about the reasons why this happens or how to avoid it.

    PPS Of course, I could always buy a newer and more capable machine, and that is exactly what I'm trying to evaluate: is this a question of outdated hardware, or will the problem keep appearing on any (decently but not hugely sized) machine?

    Read the article

  • Swiss Re increases data warehouse performance and deploys in record time

    - by KLaker
    Great information on yet another data warehouse deployment on Exadata.

    A little background on Swiss Re: in 2002, Swiss Re established a data warehouse for its client markets and products to gather reinsurance information across all organizational units into an integrated structure. The data warehouse provided the basis for reporting at the group level with drill-down capability to individual contracts, while facilitating application integration and data exchange by using common data standards. Initially focusing on property and casualty reinsurance information only, it now includes life and health reinsurance, insurance, and nonlife insurance information.

    Key highlights of the benefits that Swiss Re achieved by using Exadata:

      Reduced the time to feed the data warehouse and generate data marts by 58%
      Reduced average runtime by 24% for standard reports, comfortably loading two data warehouse refreshes per day with incremental feeds
      Freed up technical experts by significantly minimizing time spent on tuning activities

    Most importantly, this was one of the fastest project deployments in Swiss Re's history. They went from installation to production in just four months! What is truly surprising is that it took only two weeks from power-on to testing the machine with full data volumes! Business teams at Swiss Re are now able to fully exploit up-to-date analytics across property, casualty, life, health insurance, and reinsurance lines to identify successful products.

    These points are highlighted in the following quotes from Dr. Stephan Gutzwiller, Head of Data Warehouse Services at Swiss Re:

    "We were operating a complete Oracle stack, including servers, storage area network, operating systems, and databases that was well optimized and delivered very good performance over an extended period of time. When a hardware replacement was scheduled for 2012, Oracle Exadata was a natural choice—and the performance increase was impressive. It enabled us to deliver analytics to our internal customers faster, without hiring more IT staff."

    "The high quality data that is readily available with Oracle Exadata gives us the insight and agility we need to cater to client needs. We also can continue re-engineering to keep up with the increasing demand without having to grow the organization. This combination creates excellent business value."

    Our full press release is available here: http://www.oracle.com/us/corporate/customers/customersearch/swiss-re-1-exadata-ss-2050409.html. If you want more information about how Exadata can increase the performance of your data warehouse, visit our home page: http://www.oracle.com/us/products/database/exadata-database-machine/overview/index.html

    Read the article

  • Why don't %MEM values add up to mem in top?

    - by ben
    I'm currently debugging performance issues with my VPS and for that I'm trying to understand which of the processes eat the most memory. Reading top, here's what I get:

        Mem:   366544k total,  321396k used,   45148k free,     380k buffers
        Swap: 1048572k total,  592388k used,  456184k free,    7756k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
        12339 ruby   20   0  844m  74m 2440 S    0 20.8  0:24.84 ruby
        12363 ruby   20   0  844m  73m 1576 S    0 20.6  0:00.26 ruby
        21117 ruby   20   0  171m  33m 1792 S    0  9.3  2:03.98 ruby
        11846 ruby   20   0  858m  21m 1820 S    0  6.0  0:09.15 ruby
        21277 ruby   20   0  219m  11m 1648 S    0  3.2  2:00.98 ruby
          792 root   20   0  266m  10m 1024 S    0  3.0  1:40.06 ruby
          532 mysql  20   0  234m 4760 1040 S    0  1.3  0:41.58 mysqld
          793 root   20   0  250m 4616  984 S    0  1.3  1:20.55 ruby
          586 root   20   0  156m 4532  848 S    0  1.2  6:17.10 god
        12315 ruby   20   0  175m 2412 1900 S    0  0.7  0:07.55 ruby
         3844 root   20   0 44036 2132 1028 S    0  0.6  1:08.22 ruby
        10939 ruby   20   0  179m 1884 1724 S    0  0.5  0:08.33 ruby
         4660 ruby   20   0  229m 1592 1440 S    0  0.4  2:55.46 ruby
         3879 nobody 20   0 37428  964  520 S    0  0.3  0:01.99 nginx

    As you can see, my memory is about 90% used (which is my issue), but when you add up the %MEM values it only comes to about 50-60%. Same thing with RES: it doesn't add up to ~350mb. Why? Am I misunderstanding their meaning? Thanks
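
    A small cross-check that may help here (a sketch, assuming procps is installed; note that summing RSS counts shared pages once per process, so it overstates rather than understates): comparing the sum of per-process resident memory against the kernel's "used" figure makes it easier to see how much of "used" is process memory versus page cache, buffers, or kernel allocations.

        # total resident memory of all processes, in MB
        ps -eo rss= | awk '{ sum += $1 } END { printf "%.0f MB\n", sum / 1024 }'

        # memory as the kernel accounts for it (buffers/cache shown separately)
        free -m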

    Read the article

  • EMEA Analytics & Data Integration Oracle Partner Forum

    - by Mike.Hallett(at)Oracle-BI&EPM
    MONDAY 12TH NOVEMBER, 2012 IN LONDON (UK)

    For Oracle Partners across Europe, Middle East and Africa: come to hear the latest news from Oracle OpenWorld about Oracle BI & Data Integration, and propel your business growth as an Oracle partner. This event should appeal to BI or Data Integration specialised partners, executives, sales, pre-sales and solution architects, with a choice of participation in the plenary day and then a set of special interest (technical) sessions. The follow-on breakout sessions from the 13th November provide deeper dives and technical training for those of you who wish to stay for more detailed and hands-on workshops.

    Keynote: Andrew Sutherland, SVP Oracle Technology

    Hot agenda items will include:

      The Fusion Middleware Stack: Engineered to work together
      A complete Analytics and Data Integration Solution Architecture: Big Data and Little Data combined
      In-Memory Analytics for Extreme Insight
      Latest Product Development Roadmap for Data Integration and Analytics

    Venue: Oracle's London City (Moorgate) offices

    Places are limited. Register from this link {see Register button at bottom right of page}. Note: registration for the conference and the deeper dives and technical training is free of charge to OPN member partners, but you will be responsible for your own travel and hotel expenses.

    Event Schedule

    During this event you can learn about partner success stories, participate in an array of break-out sessions, exchange information with other partners and enjoy a vibrant panel discussion.

    Nov. 12th: Day 1 Main Plenary Session, full day, starting 10.30 am. Oracle-hosted dinner in the evening.

    Nov. 13th onwards:
      Architecture Masterclass: IM Reference Architecture – Big Data and Little Data combined (1 day)
      BI-Apps Bootcamp (4 days)
      Oracle GoldenGate workshop (1 day)
      Oracle Data Integrator and Oracle Enterprise Data Quality workshop (1 day)

    For further information and detail, download the Agenda (pdf) or contact Michael Hallett at [email protected].

    Read the article

  • How to recover data from NTFS partition that was made into a Swap partition?

    - by Raghav Mehta
    I have extremely important data on my Windows partition. During the Ubuntu 10.10 installation, when it said that I should create something called swap space, I selected that partition to be the swap space (without even knowing what it actually meant). GRUB 2 doesn't show up, so I don't get a choice to boot Ubuntu or Windows. I don't get my Windows partition as a removable device in Ubuntu either. When I go to Disk Utility, select sda2 (i.e. my Windows partition), click Edit Partition, select HPFS/NTFS for the type, tick "bootable" and click OK, the small processing sign keeps rotating at the bottom right of sda2 in the chart, and after about 10 to 15 minutes it gives an unknown error; thus, I am still unable to use my Windows. I am even worse than a beginner who doesn't know a thing about Ubuntu, so please be patient and help me out.

    Read the article

  • Does using structure data semantic LocalBusiness schema markup work for local EMD URL's?

    - by ElHaix
    Based on what I have read about Google's recent Panda and Penguin updates, I'm getting the impression that using semantic markup may help improve SEO results. On an EMD (exact match domain) site that may have been hit, we list location-based products. We are now going to be adding an itemtype="http://schema.org/Product" to each product, with relevant details. However, that product may be available in Los Angeles and also appear on a Seattle results page. We could add a LocalBusiness item type on each geo page to define the geo location for that page. The definition states: "A particular physical business or branch of an organization. Examples of LocalBusiness include a restaurant, a particular branch of a restaurant chain, a branch of a bank, a medical practice, a club, a bowling alley, etc." We could then use the location property, which would simply include the city/state details. I realize that this looks like it is meant for a physical location; however, could this be done without seeming black-hat?

    Read the article
