Search Results

Search found 95201 results on 3809 pages for 'system data sqlite'.


  • Linux-Containers — Part 1: Overview

    - by Lenz Grimmer
    "Containers" by Jean-Pierre Martineau (CC BY-NC-SA 2.0). Linux Containers (LXC) provide a means to isolate individual services or applications as well as of a complete Linux operating system from other services running on the same host. To accomplish this, each container gets its own directory structure, network devices, IP addresses and process table. The processes running in other containers or the host system are not visible from inside a container. Additionally, Linux Containers allow for fine granular control of resources like RAM, CPU or disk I/O. Generally speaking, Linux Containers use a completely different approach than "classicial" virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based on). An application running inside a container will be executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O-resources. Linux containers can offer the best possible performance and several possibilities for managing and sharing the resources available. Similar to Containers (or Zones) on Oracle Solaris or FreeBSD jails, the same kernel version runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided it supports the version of the Linux kernel that runs on the host. This approach has one caveat, though - if any of the containers causes a kernel crash, it will bring down all other containers (and the host system) as well. For example, Oracle's Unbreakable Enterprise Kernel Release 2 (2.6.39) is supported for both Oracle Linux 5 and 6. This makes it possible to run Oracle Linux 5 and 6 container instances on top of an Oracle Linux 6 system. Since Linux Containers are fully implemented on the OS level (the Linux kernel), they can be easily combined with other virtualization technologies. It's certainly possible to set up Linux containers within a virtualized Linux instance that runs inside Oracle VM Server for Oracle VM Virtualbox. Some use cases for Linux Containers include: Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated to run inside containers on a single server, to preserve energy and rack space. Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his "own" application instance, with a defined level of service/performance. This prevents that one user's application could hog the entire system and ensures, that each user only has access to his own data set. It also helps to save main memory — if multiple instances of a same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use, they are generally held in memory once and mapped to multiple processes. 
Quickly creating sandbox environments for development and testing purposes: containers that have been created and configured once can be archived as templates and can be duplicated (cloned) instantly on demand. After finishing the activity, the clone can safely be discarded. This makes it possible to provide repeatable software builds and test environments, because the system will always be reset to its initial state for each run. Linux Containers also boot significantly faster than "classic" virtual machines, which can save a lot of time when running frequent build or test runs on applications. Safe execution of an individual application: if an application running inside a container has been compromised because of a security vulnerability, the host system and other containers remain unaffected. The potential damage can be minimized, analyzed and resolved directly from the host system. Note: Linux Containers on Oracle Linux 6 with the Unbreakable Enterprise Kernel Release 2 (2.6.39) are still marked as Technology Preview - their use is only recommended for testing and evaluation purposes. The Open-Source project "Linux Containers" (LXC) is driving the development of the technology behind this, which is based on the "Control Groups" (CGroups) and "Name Spaces" functionality of the Linux kernel. Oracle is actively involved in the Linux Containers development and contributes patches to the upstream LXC code base. Control Groups provide means to manage and monitor the allocation of resources for individual processes or process groups. Among other things, you can restrict the maximum amount of memory, CPU cycles as well as the disk and network throughput (in MB/s or IOP/s) that are available for an application. Name Spaces help to isolate process groups from each other, e.g. the visibility of other running processes or the exclusive access to a network device. It's also possible to restrict a process group's access to and visibility of the entire file system hierarchy (similar to a classic "chroot" environment). CGroups and Name Spaces provide the foundation on which Linux Containers are based, but they can actually be used independently as well. A more detailed description of how Linux Containers can be created and managed on Oracle Linux will follow in the second part of this article (a minimal command-line sketch follows at the end of this entry). Additional links related to Linux Containers: OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy Linux Containers on Wikipedia - Lenz Grimmer Follow me on: Personal Blog | Facebook | Twitter | Linux Blog
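As a rough illustration of what the second part will cover, the commands below sketch a container lifecycle with the LXC userspace tools. This is only a sketch: the container name (web01) and the "oracle" template are placeholders, and the exact template names and options vary by distribution and LXC version.

    # Create a container from a distribution template (template name is an assumption)
    sudo lxc-create -n web01 -t oracle
    # Start it in the background and list the running containers
    sudo lxc-start -n web01 -d
    sudo lxc-ls
    # Attach to its console, then stop and remove it when done
    sudo lxc-console -n web01
    sudo lxc-stop -n web01
    sudo lxc-destroy -n web01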

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 29 (sys.dm_os_buffer_descriptors)

    - by Tamarick Hill
    The sys.dm_os_buffer_descriptors Dynamic Management View gives you a look into the data pages that are currently in your SQL Server buffer pool. Just in case you are not familiar with some of the internals of SQL Server and how the engine works, SQL Server only works with objects that are in memory (the buffer pool). When an object such as a table needs to be read and it does not exist in the buffer pool, SQL Server will read (copy) the necessary data page(s) from disk into the buffer pool and cache them. Caching takes place so that the pages can be reused again, which avoids the need for expensive physical reads. To better illustrate this DMV, let's query it against our AdventureWorks2012 database and view the result set. SELECT * FROM sys.dm_os_buffer_descriptors WHERE database_id = db_id('AdventureWorks2012') The first column returned from this result set is the database_id column, which identifies the specific database for a given row. The file_id column represents the file that a particular buffer descriptor belongs to. The page_id column represents the ID for the data page within the buffer. The page_level column represents the index level of the data page. Next we have the allocation_unit_id column, which identifies a unique allocation unit. An allocation unit is basically a set of data pages. The page_type column tells us exactly what type of page is in the buffer pool. From my screenshot above you can see I have three distinct types of pages in my buffer pool: Index, Data, and IAM pages. Index pages are pages that are used to build the root and intermediate levels of a B-tree. A Data page would represent the actual leaf pages of a clustered index, which contain the actual data for the table. Without getting into too much detail, an IAM page is an Index Allocation Map page, which tracks GAM (Global Allocation Map) pages, which in turn track extents on your system. The row_count column details how many data rows are present on a given page. The free_space_in_bytes column tells you how much of a given data page is still available; remember, pages are 8KB in size. The is_modified column signifies whether or not a page has been changed since it was read into memory, i.e. a dirty page. The numa_node column represents the Non-uniform memory access node for the buffer. Lastly is the read_microsec column, which tells you how many microseconds it took for a data page to be read (copied) into the buffer pool. This is a great DMV to use when you are tracking down a memory issue or if you just want to have a look at what type of pages are currently in your buffer pool. For more information about this DMV, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/ms173442.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • International Radio Operators Alphabet in F# & Silverlight – Part 2

    - by MarkPearl
    So the brunt of my very complex F# code has been done. Now it's just putting the Silverlight stuff in. The first thing I did was add a new project to my solution. I gave it a name and VS2010 did the rest of the magic in creating the .Web project etc. In this instance, because I want to take the MVVM approach and make use of commanding, I have decided to make the frontend a Silverlight4 project. I now need to move my F# code into a proper Silverlight Library. Warning – when you create the Silverlight Library, VS2010 will ask you whether you want it to be based on Silverlight3 or Silverlight4. I originally went for Silverlight4, only to discover when I tried to compile my solution that I was given an error… Error 12 F# runtime for Silverlight version v4.0 is not installed. Please go to http://go.microsoft.com/fwlink/?LinkId=177463 to download and install matching.. After asking around I discovered that the Silverlight4 F# runtime is not available yet. No problem, the suggestion was to change the F# Silverlight Library to a Silverlight3 project; however, when going to the properties of the project file – even though I changed it to Silverlight3 – VS2010 did not like it and kept reverting it to a Silverlight4 project. After a few minutes of scratching my head I simply deleted the Silverlight4 F# Library project and created a new F# Silverlight Library project in Silverlight3, and VS2010 was happy. Now that the project structure is set up, the rest is fairly simple. You need to add the Silverlight Library as a reference to the C# Silverlight Front End. Then set up your views; since I was following the MVVM pattern I made a Views & ViewModel folder and set up the relevant View and ViewModels. The MainPageViewModel file looks as follows using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using System.Collections.ObjectModel; namespace IROAFrontEnd.ViewModels { public class MainPageViewModel : ViewModelBase { private string _iroaString; private string _inputCharacters; public string InputCharacters { get { return _inputCharacters; } set { if (_inputCharacters != value) { _inputCharacters = value; OnPropertyChanged("InputCharacters"); } } } public string IROAString { get { return _iroaString; } set { if (_iroaString != value) { _iroaString = value; OnPropertyChanged("IROAString"); } } } public ICommand MySpecialCommand { get { return new MyCommand(this); } } public class MyCommand : ICommand { readonly MainPageViewModel _myViewModel; public MyCommand(MainPageViewModel myViewModel) { _myViewModel = myViewModel; } public event EventHandler CanExecuteChanged; public bool CanExecute(object parameter) { return true; } public void Execute(object parameter) { var result = ModuleMain.ConvertCharsToStrings(_myViewModel.InputCharacters); var newString = ""; foreach (var Item in result) { newString += Item + " "; } _myViewModel.IROAString = newString.Trim(); } } } } One of the features I like in Silverlight4 is the new commanding. You will notice in my ViewModel that I have put the code under the command's Execute method to reference my F# module. At the moment this could be cleaned up even more, but it will suffice for now.
public void Execute(object parameter) { var result = ModuleMain.ConvertCharsToStrings(_myViewModel.InputCharacters); var newString = ""; foreach (var Item in result) { newString += Item + " "; } _myViewModel.IROAString = newString.Trim(); } I then needed to set the view up. If we have a look at the MainPageView.xaml, the XAML code looks like the following…. Nothing too fancy, battleship grey for now… take careful note of the binding of the command in the button to MySpecialCommand, which was created in the ViewModel. <UserControl x:Class="IROAFrontEnd.Views.MainPageView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400"> <Grid x:Name="LayoutRoot" Background="White"> <Grid.RowDefinitions> <RowDefinition/> <RowDefinition/> <RowDefinition/> </Grid.RowDefinitions> <TextBox Grid.Row="0" Text="{Binding InputCharacters, Mode=TwoWay}"/> <Button Grid.Row="1" Command="{Binding MySpecialCommand}"> <TextBlock Text="Generate"/> </Button> <TextBlock Grid.Row="2" Text="{Binding IROAString}"/> </Grid> </UserControl> Finally, in the App.xaml.cs file we need to set the View and link it to the ViewModel. private void Application_Startup(object sender, StartupEventArgs e) { var myView = new MainPageView(); var myViewModel = new MainPageViewModel(); myView.DataContext = myViewModel; this.RootVisual = myView; } Once this is done – hey presto – it worked. I typed in some "Test Input" and clicked the generate button and the correct Radio Operators Alphabet was generated. And that's the end of my first very basic F# Silverlight application.

    Read the article

  • SQL SERVER – Order By Numeric Values Formatted as String

    - by pinaldave
    When I was writing this blog post I had a hard time coming up with its title, so I did my best to come up with one. Here is the reason why. I wrote a blog post earlier, SQL SERVER – Find First Non-Numeric Character from String. One of the questions was how that blog post can be useful in a real-life scenario. This blog post is the answer to that question. Let us first see the problem. We have a table which has a column containing alphanumeric data. The data always has an integer as the first part and a string as the later part. The business need is to order the data based on the first part of the alphanumeric data, which is an integer. Now the problem is that no matter how we use ORDER BY, the result is not produced as expected. Let us understand this with an example. Prepare sample data: -- How to find first non numeric character USE tempdb GO CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100)) GO INSERT INTO MyTable (ID, Col1) SELECT 1, '1one' UNION ALL SELECT 2, '11eleven' UNION ALL SELECT 3, '2two' UNION ALL SELECT 4, '22twentytwo' UNION ALL SELECT 5, '111oneeleven' GO -- Select Data SELECT * FROM MyTable GO The above query will give the following result set. Now let us use ORDER BY Col1 and observe the result along with the original SELECT. -- Select Data SELECT * FROM MyTable GO -- Select Data SELECT * FROM MyTable ORDER BY Col1 GO The result is not as expected. We need the result in the following format. Here is a good example of how we can use PATINDEX. -- Use of PATINDEX SELECT ID, LEFT(Col1,PATINDEX('%[^0-9]%',Col1)-1) 'Numeric Character', Col1 'Original Character' FROM MyTable ORDER BY LEFT(Col1,PATINDEX('%[^0-9]%',Col1)-1) GO We can use PATINDEX to identify the length of the digit part in the alphanumeric string (Remember: our string always has an integer as its first part. It will not work in any other scenario). Now you can use the LEFT function to extract the INT portion from the alphanumeric string and order the data according to it. You can easily clean up the script by dropping the following table. DROP TABLE MyTable GO Here is the complete script so you can easily refer to it. -- How to find first non numeric character USE tempdb GO CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100)) GO INSERT INTO MyTable (ID, Col1) SELECT 1, '1one' UNION ALL SELECT 2, '11eleven' UNION ALL SELECT 3, '2two' UNION ALL SELECT 4, '22twentytwo' UNION ALL SELECT 5, '111oneeleven' GO -- Select Data SELECT * FROM MyTable GO -- Select Data SELECT * FROM MyTable ORDER BY Col1 GO -- Use of PATINDEX SELECT ID, Col1 'Original Character' FROM MyTable ORDER BY LEFT(Col1,PATINDEX('%[^0-9]%',Col1)-1) GO DROP TABLE MyTable GO Well, isn't it an interesting solution? Any suggestion for a better solution? Additionally, any suggestion for changing the title of this blog post? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL String, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints

    - by pinaldave
    I earlier wrote about the SQL BI Quiz over here and here. The details of the quiz are here: Working with huge data is very common when it is about Data Warehousing. It is necessary to create Cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from a cube takes a lot of time. Let us assume that your cube is returning data very quickly. Suddenly, one day it is returning the data very slowly. What are the three things you will do to diagnose this? After diagnosing, what will you do to resolve the performance issue? Participate in my question over here. I requested BI expert Jason Thomas to help with a few hints for blog readers. He is one of the leading SSAS experts and writes about a complicated subject in simple words. If queries were executing properly before but now take a long time to return the data, it means that there has been a change in the environment in which they are running. Some possible changes are listed below:-  1) Data factors:- Compare the data size then and now. An increase in data can result in different execution times. Poorly written queries as well as poor design will not start showing issues till the data grows. How to find it out? (Ans : SQL Server Profiler and Perfmon counters can be used for identifying the issues and performance tuning the MDX queries)  2) Internal factors:- Are some slow MDX queries, or multiple MDX queries, running at the same time which were not running when you tested it before? Is there any locking happening due to proactive caching or processing operations? Are the measure group caches being cleared by processing operations? (Ans : Again, Profiler and Perfmon counters will help in finding it out. Load testing can be done using AS Performance Workbench (http://asperfwb.codeplex.com/) by running multiple queries at once)  3) External factors:- Is some other application competing for the same resources?  HINT : Read "Identifying and Resolving MDX Query Performance Bottlenecks in SQL Server 2005 Analysis Services" (http://sqlcat.com/whitepapers/archive/2007/12/16/identifying-and-resolving-mdx-query-performance-bottlenecks-in-sql-server-2005-analysis-services.aspx) Well, these are great tips. Now win big prizes by participating in my question over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Can't remove the libpcap0.8 package

    - by Yogesh
    I am getting error when running apt-get remove root@System:~/Downloads# sudo apt-get remove The following packages have unmet dependencies: libpcap0.8 : Breaks: libpcap0.8:i386 (!= 1.4.0-2) but 1.5.3-2 is installed libpcap0.8:i386 : Breaks: libpcap0.8 (!= 1.5.3-2) but 1.4.0-2 is installed libpcap0.8-dev : Depends: libpcap0.8 (= 1.5.3-2) but 1.4.0-2 is installed E: Unmet dependencies. Try using -f. and when I ran apt-get remove -f this is what happens: root@System:~/Downloads# sudo apt-get remove -f The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads# clear root@System:~/Downloads# sudo apt-get remove -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads# root@System:~/Downloads# sudo apt-get check Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libpcap0.8 : Breaks: libpcap0.8:i386 (!= 1.4.0-2) but 1.5.3-2 is installed libpcap0.8:i386 : Breaks: libpcap0.8 (!= 1.5.3-2) but 1.4.0-2 is installed libpcap0.8-dev : Depends: libpcap0.8 (= 1.5.3-2) but 1.4.0-2 is installed E: Unmet dependencies. Try using -f. 
root@System:~/Downloads# apt-cache policy libpcap0.8:amd64 libpcap0.8 libpcap0.8-dev libpcap0.8: Installed: 1.4.0-2 Candidate: 1.5.3-2 Version table: 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages *** 1.4.0-2 0 100 /var/lib/dpkg/status libpcap0.8: Installed: 1.4.0-2 Candidate: 1.5.3-2 Version table: 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages *** 1.4.0-2 0 100 /var/lib/dpkg/status libpcap0.8-dev: Installed: 1.5.3-2 Candidate: 1.5.3-2 Version table: *** 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages 100 /var/lib/dpkg/status root@System:~/Downloads# root@System:~/Downloads# sudo apt-get -f remove libpcap0.8 libpcap0.8-dev libpcap0.8-dev:i386 libpcap0.8:i386 Reading package lists... Done Building dependency tree Reading state information... Done Package 'libpcap0.8-dev:i386' is not installed, so not removed. Did you mean 'libpcap0.8-dev'? You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: libpcap-dev : Depends: libpcap0.8-dev but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). root@System:~/Downloads# sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads#
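One workaround that is often suggested for this particular dpkg error ("trying to overwrite shared ... which is different from other instances of package ...") is to let dpkg force-overwrite the conflicting man page from the cached package and then let apt finish, so that both architectures end up on the same version. This is only a sketch, untested against this exact system, and --force-overwrite should be used with care:

    sudo dpkg -i --force-overwrite /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb
    sudo apt-get -f install
    # bring the i386 copy up to the same version so the Breaks relation is satisfied
    sudo apt-get install libpcap0.8:i386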

    Read the article

  • Architecture for a template-building, WYSIWYG application

    - by Sam Selikoff
    I'm building a WYSIWYG designer in Ember.js. The designer will allow users to create campaigns - think MailChimp. To build a campaign, users will choose an existing template. The template will have a defined layout. The user will then be taken to the designer, where he will be able to edit the text and style, and additionally change some layout options. I've been thinking about how best to go about structuring this app, and there are a few hurdles. Specifically, the output of the campaign will be dynamic: eventually, it will be published somewhere, and when the consumers (not my users, but the people clicking on the campaign that my user created) visit the campaign, certain pieces of data will change, depending on the type of consumer viewing the campaign. That means the ultimate output of the designer will be a dynamic site. The data that is dynamic for this site - the end product - will not be manipulated by the user in the designer. However, the data that will be manipulated by the user in the designer are things like copy, styles, layout options, etc. I'll call the first set of variables server-side data, and the second client-side data. It seems, then, that the process will go something like this: I'll need to create templates for this designer that have two dynamic segments. For instance, the server-side data could be Liquid expressions, and the client-side data Handlebars expressions. When the user creates a campaign, I would compile the template on the back end using some dummy data for the server-side variables, and serve up a Handlebars template to the Ember app. The user would then edit the template, and the Ember app would save all his edits to the JS variables that were powering the template. This way he'd be able to preview the template. When he saves, he'll send back the selected template, along with all the data and options he's chosen. When it comes time to publish, the back-end system will have to do two things: compile the template with Handlebars using the campaign data, and then compile the template with Liquid using the server-side data. Is my thinking roughly accurate about this, or is there a simpler way?

    Read the article

  • What's Old is New Again

    - by David Dorf
    Last night I told my son he could stream music to his tablet "from the cloud" (in this case, the Amazon Cloud).  He paused, then said, "what is the cloud?"  I replied, "a bunch of servers connected to the internet."  Apparently he had visions of something much more magnificent.  Another similar term is "big data."  These marketing terms help to quickly convey topics but are oversimplifications that are open to many interpretations.  At their core, those terms are shiny packages holding recycled ideas. I see many headlines declaring big data changes everything, but it doesn't.  Savvy retailers have been dealing with large volumes of data since the electronic cash register was invented.  But there have been a few changes to the landscape that make big data a topic of conversation: 1. Computing power has caught up to storage volumes. It's now possible to more thoroughly analyze the copious volumes of data retailers have been squirreling away.  CPUs are faster, solid state drives are more plentiful, and new ways to store and search data are available.  My iPhone has more power than the computer used in the Apollo mission to the moon. 2. Unstructured data is everywhere.  The Web used to be where retailers published product information, but now users are generating the bulk of the content in the form of comments, videos, and "likes."  The variety of information available to retailers is huge, and its meaning difficult to discern. 3. Everything is connected.  Looking at a report from my router, there are no less than 20 active devices on my home network.  We can track the location of mobile phones, tag products with RFID, and set our thermostats (I love my Nest) from a thousand miles away.  Not only is there more data, but it's arriving at higher velocity. Careful readers will note the three Vs that help define so-called big data: volume, variety, and velocity. We now have more volume, more variety, and more velocity, and different technologies to deal with them.  But at the heart, the objectives are still the same: Informed decisions Accurate forecasts Improved optimizations So don't let the term "big data" throw you off the scent.  Retailers still need to execute on the basics.  But do take a fresh look at the data that's available and the new technologies to process it.  The landscape will continue to change and agile organizations will always be reevaluating their approaches.  You can just add some more weapons to the arsenal.

    Read the article

  • C# send and receive objects over the network?

    - by Data-Base
    Hello, I'm working on a server/client project. The client will be asking the server for info and the server will send it back to the client. The info may be a string, number, array, list, arraylist or any other object. I found a lot of examples but I faced issues. The solution I found so far is to serialize the object (data), send it, then deserialize it to process. Here is the server code public void RunServer(string SrvIP,int SrvPort) { try { var ipAd = IPAddress.Parse(SrvIP); /* Initializes the Listener */ if (ipAd != null) { var myList = new TcpListener(ipAd, SrvPort); /* Start Listening at the specified port */ myList.Start(); Console.WriteLine("The server is running at port "+SrvPort+"..."); Console.WriteLine("The local End point is :" + myList.LocalEndpoint); Console.WriteLine("Waiting for a connection....."); while (true) { Socket s = myList.AcceptSocket(); Console.WriteLine("Connection accepted from " + s.RemoteEndPoint); var b = new byte[100]; int k = s.Receive(b); Console.WriteLine("Recieved..."); for (int i = 0; i < k; i++) Console.Write(Convert.ToChar(b[i])); string cmd = Encoding.ASCII.GetString(b); if (cmd.Contains("CLOSE-CONNECTION")) break; var asen = new ASCIIEncoding(); // sending text s.Send(asen.GetBytes("The string was received by the server.")); // the line above to be modified to send a serialized object? Console.WriteLine("\nSent Acknowledgement"); s.Close(); Console.ReadLine(); } /* clean up */ myList.Stop(); } } catch (Exception e) { Console.WriteLine("Error..... " + e.StackTrace); } } here is the client code that should return an object public object runClient(string SrvIP, int SrvPort) { object obj = null; try { var tcpclnt = new TcpClient(); Console.WriteLine("Connecting....."); tcpclnt.Connect(SrvIP, SrvPort); // use the ipaddress as in the server program Console.WriteLine("Connected"); Console.Write("Enter the string to be transmitted : "); var str = Console.ReadLine(); Stream stm = tcpclnt.GetStream(); var asen = new ASCIIEncoding(); if (str != null) { var ba = asen.GetBytes(str); Console.WriteLine("Transmitting....."); stm.Write(ba, 0, ba.Length); } var bb = new byte[2000]; var k = stm.Read(bb, 0, bb.Length); string data = null; for (var i = 0; i < k; i++) Console.Write(Convert.ToChar(bb[i])); //convert to object code ?????? Console.ReadLine(); tcpclnt.Close(); } catch (Exception e) { Console.WriteLine("Error..... " + e.StackTrace); } return obj; } I need to know a good serialize/deserialize approach and how to integrate it into the solution above :-( I would be really thankful for any help. Cheers

    Read the article

  • Apache server still running but users cannot connect to the website; after "sudo apachectl restart" users can connect again. What's wrong? [on hold]

    - by Tinyfool
    My website is http://ourcoders.com/, recently I found sometime user report can not connect to my website, but I ssh to server, I found Apache still running, like this: root@AY1401261057077842eaZ:~# ps aux|grep apache root 873 0.0 1.3 290496 13528 ? Ss Aug18 0:28 /usr/sbin/apache2 -k start www-data 3490 0.0 1.8 299004 18764 ? S Aug21 0:01 /usr/sbin/apache2 -k start www-data 3612 0.0 1.5 296008 15540 ? S Aug21 0:03 /usr/sbin/apache2 -k start www-data 3860 0.0 1.5 296636 16268 ? S Aug21 0:00 /usr/sbin/apache2 -k start www-data 3913 0.0 1.2 295468 13084 ? S Aug21 0:00 /usr/sbin/apache2 -k start www-data 3931 0.0 1.7 298488 18228 ? S 16:02 0:01 /usr/sbin/apache2 -k start www-data 3938 0.0 1.9 299128 19724 ? S 16:02 0:02 /usr/sbin/apache2 -k start www-data 4465 0.0 1.6 296688 16404 ? S Aug21 0:00 /usr/sbin/apache2 -k start www-data 5075 0.0 1.2 295468 13044 ? S 16:16 0:00 /usr/sbin/apache2 -k start www-data 5153 0.0 1.5 295880 15612 ? S 16:17 0:00 /usr/sbin/apache2 -k start www-data 5770 0.0 1.5 296608 16016 ? S 16:30 0:00 /usr/sbin/apache2 -k start www-data 5773 0.0 1.6 296948 16640 ? S 16:30 0:00 /usr/sbin/apache2 -k start www-data 5816 0.0 1.6 297216 16976 ? S 16:31 0:01 /usr/sbin/apache2 -k start www-data 5918 0.0 1.7 298228 17820 ? S 16:33 0:01 /usr/sbin/apache2 -k start www-data 6023 0.0 1.9 299864 19840 ? S 16:35 0:13 /usr/sbin/apache2 -k start www-data 6073 0.0 1.7 298480 18120 ? S 16:36 0:02 /usr/sbin/apache2 -k start www-data 6088 0.0 2.0 300488 21008 ? S 16:36 0:12 /usr/sbin/apache2 -k start www-data 6114 0.0 1.7 298548 18268 ? S 16:37 0:12 /usr/sbin/apache2 -k start www-data 6134 0.0 1.6 296688 16532 ? S 16:37 0:04 /usr/sbin/apache2 -k start www-data 6193 0.0 1.7 297908 17420 ? S 16:38 0:08 /usr/sbin/apache2 -k start www-data 6821 0.0 1.8 299556 19072 ? S 16:43 0:11 /usr/sbin/apache2 -k start www-data 7058 0.0 1.7 298676 18204 ? S 16:48 0:10 /usr/sbin/apache2 -k start www-data 7065 0.0 1.8 299028 18868 ? S 16:48 0:11 /usr/sbin/apache2 -k start www-data 7084 0.0 1.8 299508 19020 ? S 16:48 0:11 /usr/sbin/apache2 -k start www-data 7221 0.0 1.8 299160 18768 ? S 16:51 0:09 /usr/sbin/apache2 -k start www-data 11453 0.0 1.7 298484 18256 ? S 09:39 0:02 /usr/sbin/apache2 -k start root 26324 0.0 0.0 8084 920 pts/0 S+ 22:52 0:00 grep --color=auto apache root 28517 0.0 0.0 4404 612 ? S Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28518 0.0 0.0 4404 616 ? S Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28519 0.0 0.0 4404 612 ? S Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28520 0.0 0.0 4404 616 ? S Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28521 0.0 0.0 4312 552 ? S Aug21 0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28522 0.0 0.0 4308 548 ? S Aug21 0:07 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28523 0.0 0.0 4176 352 ? S Aug21 0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28524 0.0 0.0 4180 356 ? S Aug21 0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log Today's only error log is blow. [Sat Aug 23 22:52:47 2014] [notice] SIGHUP received. 
Attempting to restart [Sat Aug 23 22:52:47 2014] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.13 with Suhosin-Patch configured -- resuming normal operations traffic information: cat access-2014-08-23.log | cut -d " " -f4 |cut -d":" -f2 |sort|uniq -c |sort -nr 5692 14 5291 15 5083 16 4723 23 4463 12 4057 17 4011 11 3926 13 3852 10 3187 05 3176 09 3055 06 2790 07 2672 00 2608 02 2591 01 2577 04 2514 03 2497 08 707 22 88 18 After I use "sudo apachectl restart", user can connect my website. So I want to know? What is the problem? And if "sudo apachectl restart" is needed, can I automate run this command? Today this kind struts appear again, and I run netstat -a -n Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN tcp 0 0 115.28.146.116:80 125.39.208.120:50708 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:50278 SYN_RECV tcp 0 0 115.28.146.116:80 220.173.142.152:23320 SYN_RECV tcp 0 0 115.28.146.116:80 60.173.247.132:52851 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:39397 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:56894 SYN_RECV tcp 0 0 115.28.146.116:80 183.129.174.2:21291 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.120:44499 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.120:34017 SYN_RECV tcp 0 0 115.28.146.116:80 124.65.50.210:3774 SYN_RECV tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:15770 0.0.0.0:* LISTEN tcp 1 0 115.28.146.116:80 14.127.65.219:61633 CLOSE_WAIT tcp 305 0 115.28.146.116:80 125.39.208.120:37593 ESTABLISHED tcp 0 0 10.144.142.201:52866 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:52873 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:52868 10.146.6.61:3306 TIME_WAIT tcp 343 0 115.28.146.116:80 182.118.20.215:50709 ESTABLISHED tcp 0 0 115.28.146.116:54784 173.194.127.243:80 ESTABLISHED tcp 1 0 115.28.146.116:80 116.192.2.185:41253 CLOSE_WAIT tcp 0 0 10.144.142.201:52876 10.146.6.61:3306 ESTABLISHED tcp 559 0 115.28.146.116:80 218.241.144.114:54501 ESTABLISHED tcp 376 0 115.28.146.116:80 116.213.196.119:50604 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59339 CLOSE_WAIT tcp 214 0 115.28.146.116:80 142.4.215.40:34443 ESTABLISHED tcp 0 0 115.28.146.116:48635 115.28.146.116:80 ESTABLISHED tcp 187 0 115.28.146.116:80 115.28.146.116:48635 ESTABLISHED tcp 0 0 10.144.142.201:52853 10.146.6.61:3306 TIME_WAIT tcp 594 0 115.28.146.116:80 183.129.174.2:7090 CLOSE_WAIT tcp 0 0 10.144.142.201:52874 10.146.6.61:3306 TIME_WAIT tcp 0 0 115.28.146.116:80 182.118.20.166:44081 TIME_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59028 CLOSE_WAIT tcp 1 0 115.28.146.116:80 14.127.65.219:61665 CLOSE_WAIT tcp 0 0 10.144.142.201:52860 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:46983 10.146.6.61:3306 ESTABLISHED tcp 0 2290 115.28.146.116:80 14.154.179.243:41049 FIN_WAIT1 tcp 0 0 10.144.142.201:42900 10.146.6.61:3306 ESTABLISHED tcp 571 0 115.28.146.116:80 220.173.142.152:23295 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59337 CLOSE_WAIT tcp 438 0 115.28.146.116:80 42.120.74.202:31567 CLOSE_WAIT tcp 0 0 115.28.146.116:80 113.36.238.28:59498 ESTABLISHED tcp 259 0 115.28.146.116:80 66.249.65.56:36739 ESTABLISHED tcp 0 0 115.28.146.116:80 113.36.238.28:59341 ESTABLISHED tcp 0 0 115.28.146.116:80 142.4.215.40:34267 FIN_WAIT2 tcp 799 0 115.28.146.116:80 180.173.88.1:52779 ESTABLISHED tcp 0 0 115.28.146.116:80 117.136.25.132:25207 FIN_WAIT2 tcp 0 0 115.28.146.116:80 220.181.108.186:42540 TIME_WAIT tcp 0 0 10.144.142.201:59902 10.242.174.13:80 TIME_WAIT tcp 0 1820 
115.28.146.116:80 218.22.140.90:39266 LAST_ACK tcp 0 0 115.28.146.116:80 66.249.65.64:56977 TIME_WAIT tcp 669 0 115.28.146.116:80 83.251.90.61:49664 ESTABLISHED tcp 0 0 10.144.142.201:52872 10.146.6.61:3306 TIME_WAIT tcp 233 0 115.28.146.116:80 54.202.88.0:43398 CLOSE_WAIT tcp 479 0 115.28.146.116:80 65.49.44.149:25739 ESTABLISHED tcp 378 0 115.28.146.116:80 148.251.124.173:39313 CLOSE_WAIT tcp 1 0 115.28.146.116:80 14.127.65.219:61697 CLOSE_WAIT tcp 1 0 115.28.146.116:80 49.4.158.2:52986 CLOSE_WAIT tcp 769 0 115.28.146.116:80 14.127.65.219:61537 ESTABLISHED tcp 0 0 10.144.142.201:52859 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:55734 10.164.2.163:9200 TIME_WAIT tcp 563 0 115.28.146.116:80 202.55.20.10:22577 CLOSE_WAIT tcp 194 0 115.28.146.116:80 37.58.100.165:50908 CLOSE_WAIT tcp 791 0 115.28.146.116:80 116.192.2.185:45628 ESTABLISHED tcp 709 0 115.28.146.116:80 113.116.61.178:65209 ESTABLISHED tcp 706 0 115.28.146.116:80 183.227.44.237:54519 ESTABLISHED tcp 301 0 115.28.146.116:80 118.198.243.127:31180 ESTABLISHED tcp 0 0 10.144.142.201:55721 10.164.2.163:9200 TIME_WAIT tcp 0 0 10.144.142.201:55726 10.164.2.163:9200 TIME_WAIT tcp 0 0 10.144.142.201:55723 10.164.2.163:9200 TIME_WAIT tcp 681 0 115.28.146.116:80 83.251.90.61:49662 ESTABLISHED tcp 0 0 115.28.146.116:80 83.251.90.61:65274 TIME_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59022 CLOSE_WAIT tcp 1 0 115.28.146.116:80 180.173.88.1:52781 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59037 CLOSE_WAIT tcp 0 0 10.144.142.201:55728 10.164.2.163:9200 TIME_WAIT tcp 231 0 115.28.146.116:37596 110.75.102.62:80 CLOSE_WAIT tcp 1 0 115.28.146.116:80 14.127.65.219:61569 CLOSE_WAIT tcp 0 0 10.144.142.201:51310 10.146.6.61:3306 ESTABLISHED tcp 299 0 115.28.146.116:80 123.125.71.16:36281 ESTABLISHED tcp 0 0 115.28.146.116:48620 115.28.146.116:80 ESTABLISHED tcp 1 0 115.28.146.116:80 183.227.44.237:54520 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59026 CLOSE_WAIT tcp 479 0 115.28.146.116:80 65.49.44.149:5490 ESTABLISHED tcp 665 0 115.28.146.116:80 83.251.90.61:49663 ESTABLISHED tcp 0 0 115.28.146.116:53744 173.194.127.147:80 ESTABLISHED tcp 1 0 115.28.146.116:80 113.36.238.28:59023 CLOSE_WAIT tcp 0 0 115.28.146.116:22 116.192.2.185:34205 ESTABLISHED tcp 333 0 115.28.146.116:80 149.174.113.111:54338 CLOSE_WAIT tcp 0 0 10.144.142.201:52861 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:52863 10.146.6.61:3306 TIME_WAIT tcp 1 0 115.28.146.116:80 116.192.2.185:43272 CLOSE_WAIT tcp 767 0 115.28.146.116:80 49.4.158.2:52947 CLOSE_WAIT tcp 668 0 115.28.146.116:80 83.251.90.61:49665 ESTABLISHED tcp 642 0 115.28.146.116:80 222.78.185.50:55788 ESTABLISHED tcp 710 0 115.28.146.116:80 113.116.61.178:65264 ESTABLISHED tcp 284 0 115.28.146.116:80 157.55.39.243:65185 ESTABLISHED tcp 450 0 115.28.146.116:80 65.49.44.149:55496 ESTABLISHED tcp 1 0 115.28.146.116:80 116.192.2.185:36629 CLOSE_WAIT tcp 233 0 115.28.146.116:80 54.202.88.0:42424 CLOSE_WAIT tcp 187 0 115.28.146.116:80 115.28.146.116:48620 ESTABLISHED tcp 1 0 115.28.146.116:80 14.127.65.219:61601 CLOSE_WAIT tcp 776 0 115.28.146.116:80 202.118.253.102:64883 CLOSE_WAIT tcp 841 0 115.28.146.116:80 37.228.105.28:49472 ESTABLISHED tcp 787 0 115.28.146.116:80 112.65.226.198:52192 ESTABLISHED tcp 0 0 10.144.142.201:55717 10.164.2.163:9200 TIME_WAIT tcp 233 0 115.28.146.116:80 54.202.88.0:42855 CLOSE_WAIT tcp 379 0 115.28.146.116:80 101.226.166.219:2322 ESTABLISHED tcp 0 0 115.28.146.116:80 183.60.212.152:43063 CLOSE_WAIT tcp 1 0 115.28.146.116:80 180.173.88.1:52780 CLOSE_WAIT tcp 784 0 
115.28.146.116:80 101.95.29.26:63094 ESTABLISHED tcp 463 0 115.28.146.116:80 65.49.44.149:53876 ESTABLISHED tcp 1 0 115.28.146.116:80 116.192.2.185:37946 CLOSE_WAIT tcp 479 0 115.28.146.116:80 65.49.44.149:41157 ESTABLISHED tcp 1 0 115.28.146.116:80 113.36.238.28:59036 CLOSE_WAIT tcp 1 0 115.28.146.116:80 49.4.158.2:52984 CLOSE_WAIT tcp 1 0 115.28.146.116:80 116.192.2.185:38100 CLOSE_WAIT tcp 0 0 10.144.142.201:52865 10.146.6.61:3306 TIME_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59027 CLOSE_WAIT tcp 0 0 115.28.146.116:36508 173.194.127.81:80 ESTABLISHED tcp 210 0 115.28.146.116:80 188.143.232.123:47775 ESTABLISHED tcp 1 0 115.28.146.116:80 113.36.238.28:59025 CLOSE_WAIT tcp 0 0 10.144.142.201:52857 10.146.6.61:3306 TIME_WAIT tcp 654 0 115.28.146.116:80 49.4.158.2:52985 ESTABLISHED tcp 0 0 115.28.146.116:58627 110.75.102.62:80 ESTABLISHED tcp 782 0 115.28.146.116:80 180.153.219.13:40293 ESTABLISHED tcp 792 0 115.28.146.116:80 116.192.2.185:48187 CLOSE_WAIT tcp6 0 0 :::22 :::* LISTEN udp 0 0 115.28.146.116:123 0.0.0.0:* udp 0 0 10.144.142.201:123 0.0.0.0:* udp 0 0 127.0.0.1:123 0.0.0.0:* udp 0 0 0.0.0.0:123 0.0.0.0:* udp6 0 0 :::123 :::* Active UNIX domain sockets (servers and established) Proto RefCnt Flags Type State I-Node Path unix 2 [ ACC ] STREAM LISTENING 8447 /var/run/mysqld/mysqld.sock unix 2 [ ACC ] SEQPACKET LISTENING 6678 /run/udev/control unix 2 [ ACC ] STREAM LISTENING 6482 @/com/ubuntu/upstart unix 2 [ ACC ] STREAM LISTENING 7543 /var/run/dbus/system_bus_socket unix 7 [ ] DGRAM 7551 /dev/log unix 2 [ ACC ] STREAM LISTENING 7650 /var/run/nscd/socket unix 2 [ ] DGRAM 7156424 unix 3 [ ] STREAM CONNECTED 7156137 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 7156136 unix 2 [ ] DGRAM 7156135 unix 2 [ ] DGRAM 7155834 unix 2 [ ] DGRAM 9734 unix 3 [ ] STREAM CONNECTED 9151 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 9150 unix 3 [ ] STREAM CONNECTED 9136 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 9135 unix 3 [ ] STREAM CONNECTED 9106 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 9105 unix 2 [ ] DGRAM 9073 unix 3 [ ] STREAM CONNECTED 7575 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 7574 unix 3 [ ] STREAM CONNECTED 7565 unix 3 [ ] STREAM CONNECTED 7564 unix 3 [ ] STREAM CONNECTED 7332 @/com/ubuntu/upstart unix 3 [ ] STREAM CONNECTED 7330 unix 3 [ ] DGRAM 6712 unix 3 [ ] DGRAM 6711 unix 3 [ ] STREAM CONNECTED 6662 @/com/ubuntu/upstart unix 3 [ ] STREAM CONNECTED 6635
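The netstat output above shows many connections stuck in SYN_RECV and CLOSE_WAIT, which usually points to the worker pool or connection backlog being exhausted (MaxClients, a slow backend such as MySQL, or a SYN flood) rather than Apache itself having died, so the root cause deserves investigation first. As a stop-gap, the restart can be automated with a simple cron watchdog; this is only a sketch, with the script path and check URL as placeholders:

    #!/bin/sh
    # /usr/local/bin/apache-watchdog.sh - run from cron, e.g. "*/5 * * * * /usr/local/bin/apache-watchdog.sh"
    # Restart Apache only if the site stops answering locally.
    if ! curl -s --max-time 10 -o /dev/null http://127.0.0.1/; then
        /usr/sbin/apachectl restart
        echo "$(date) apache restarted" >> /var/log/apache-watchdog.log
    fi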

    Read the article

  • I want to change DPI with ImageMagick without changing the actual byte-size of the image data

    - by user1694803
    I feel so horribly sorry that I have to ask this question here, but after hours of researching how to do an actually very simple task I'm still failing... In Gimp there is a very simple way to do what I want. I only have the German dialog installed but I'll try to translate it. I'm talking about going to "Image - Print Size" and then adjusting the values "X resolution" and "Y resolution", which are known to me as so-called DPI values. You can also choose the unit, which by default is "Pixel/Inch". (In German the dialog is "Bild-Druckgröße" and there "X-Auflösung" and "Y-Auflösung") Ok, the values there are often "72" by default. When I change them to e.g. "300" this has the effect that the image stays the same on the computer, but if I print it, it will be smaller if you look at it, but all the details are still there, just smaller - it has a higher resolution on the printed paper (but smaller size... which is fine for me). I am often doing that when I am working with LaTeX, or to be exact with the command "pdflatex" on a recent Ubuntu machine. When I do the above process with Gimp manually, everything works just fine. The images appear smaller in the resulting PDF but with high printing quality. What I am trying to do is to automate the process of going into Gimp and adjusting the DPI values. Since ImageMagick is known to be superb and I used it for many other tasks, I tried to achieve my goal with this tool. But it just does not do what I want. After trying a lot of things I think this is actually the command that should be my friend: convert input.png -density 300 output.png This should set the DPI to 300, as I can read everywhere on the web. It seems to work. When I check the file it stays the same. file input.png output.png input.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced output.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced When I use this command, it seems like it did what I wanted: identify -verbose output.png | grep 300 Resolution: 300x300 PNG:pHYs : x_res=300, y_res=300, units=0 (Funnily enough, the same output comes for input.png, which confuses me... so these might be the wrong parameters to watch?) But when I now render my TeX with "pdflatex" the image is still big and blurry. Also when I open the image with Gimp again the DPI values are set to "72" instead of "300". So there actually was no effect at all. Now what is the problem here? Am I getting something completely wrong? I can't be that wrong since everything works just fine with Gimp... Thanks for any help with this. I am also open to other automated solutions which are easily done on a Linux system...
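    The missing piece is most likely the density units: PNG stores its resolution in pixels per metre (the pHYs chunk, hence "units=0" above), and when ImageMagick is not told which units the -density value is in, the written resolution may not be what GIMP and pdflatex expect. A commonly suggested fix, sketched here and worth verifying on your own files, is to state the units explicitly and then check what was actually written:

        convert input.png -units PixelsPerInch -density 300 output.png
        # verify the resolution and units that were stored
        identify -format "%x x %y (%U)\n" output.png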

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big.  Notably, this has been observed when one has started to use more features of the Testing system.  Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data stored in the TFS databases. As a consequence of this, some tools have been released to remove unneeded data from the database, along with some fixes to correct bugs which were found during this process.  Furthermore, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among these are: Anu's very important blog post here describes both the problem and solutions to handle it.  She describes both the Test Attachment Cleaner tool, and also some QFE/CU releases to fix some underlying bugs which prevented the tool from being fully effective. Brian Harry's blog post here describes the problem too. This forum thread describes the problem with some solution hints. Ravi Shanker's blog post here describes best practices on solving this (TBP). Grant Holliday's blog post here describes strategies to use the Test Attachment Cleaner both to detect space problems and to rectify them.   The problem can be divided into the following areas: Publishing of test results from builds Publishing of manual test results and their attachments in particular Publishing of deployment binaries for use during a test run Bugs in SQL Server preventing total cleanup of data (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run.  Some of this data can grow rather large, like IntelliTrace logs and video recordings.   Also the pushing of binaries, which happens for automated test runs, including tests run during a build using code coverage (which will include all the files in the deployment folder), contributes a lot to the size of the attached data.   In order to handle this systematically, I have set up a 3-stage process: Find out if you have a database space issue Set up your TFS server to minimize potential database issues If you have the "problem", clean up the database and otherwise keep it clean   Analyze the data Are your database(s) growing?  Are unused test results growing out of proportion? To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases you can use the SQL Server reports from within the Management Studio to analyze the database and table sizes. Or, you can use a set of queries. I often find queries faster to use because I can tweak them the way I want them.  But be aware that these queries are undocumented and unsupported and may change when the product team wants to change them. If you have multiple Project Collections, find out which might have problems: (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed.  I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.
use master select DB_NAME(database_id) AS DBName, (size/128) SizeInMB FROM sys.master_files where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration' order by size desc Doing this on one of our SQL servers gives the following results: It is pretty easy to see on which collection to start the work.   Find out which tables are possibly too large Keep a special watch out for the tbl_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case we got this result: From Grant's blog we learnt that tbl_Content is in the Version Control category, so the only major issue we have here is the tbl_AttachmentContent table.   Find out which team projects have possibly too large attachments In order to use the TAC to find and eventually delete attachment data we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replacing the collection database name with whatever applies in your case:   use Tfs_DefaultCollection select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by p.projectname order by sum(a.compressedlength) desc In our case we got this result (I had to remove some names), out of more than 100 team projects accumulated over quite some years: As can be seen here, it is pretty obvious that the "Byggtjeneste – Projects" team project is the main one to take care of, with the ones on lines 2-4 as the next ones.  Check which attachment types take up the most space It can be nice to know which attachment types take up the space, so run the following query: use Tfs_DefaultCollection select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by a.attachmenttype order by sum(a.compressedlength) desc We then got this result: From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu's blog. Check which file types, by their extension, take up the most space Run the following query: use Tfs_DefaultCollection select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)as Extension, sum(compressedlength)/1024 as SizeInKB from tbl_Attachment group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) order by sum(compressedlength) desc This gives a result like this:   Now you should have collected enough information to tell you whether you need to do something, and also some of the information you need in order to set up your TAC settings file, both for a one-time cleanup and for scheduled maintenance later.
Binaries will still be uploaded if: Code coverage is enabled in the test settings. You change the UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE. The hotfix should be installed to: The build servers (the build agents) The machine hosting the Test Controller Local development computers (Visual Studio) Local test computers (MTM) It is not required to install it to the TFS Server, test agents or the build controller – it has no effect on these programs. If you use SQL Server 2008 R2 you should also install CU 10 (or later).  This CU fixes a potential problem of hanging "ghost" files.  This seems to happen only in certain trigger situations, but to ensure it doesn't bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2. Workaround: If you suspect hanging ghost files, they can be – with some mental effort – deduced from the ghost counters using the following SQL query: use master SELECT DB_NAME(database_id) as 'database',OBJECT_NAME(object_id) as 'objectname', index_type_desc,ghost_record_count,version_ghost_record_count,record_count,avg_record_size_in_bytes FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL , 'DETAILED') The problem is a stalled ghost cleanup process.  Stop all components that depend on the SQL Server, like the TFS Server and SPS services – that is, all applications that connect to the SQL Server. Then restart the SQL Server, and finally start up all dependent processes again.  (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1.  The R2 pre-SP1 and R2 SP1 have separate maintenance cycles, and are maintained individually. Each has its own set of CUs. When it arrives I will add the link to that CU here. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data.  The SQL Server can get into this hanging state (without the QFE) in certain cases due to this. And of course, install and set up the Test Attachment Cleaner command line power tool.  This should be done following some guidelines from Ravi Shanker: "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed." This rule minimizes the risk of the ghosted hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly. "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system" This is the last step in a 3-step process of removing SQL Server data. First the records are logically deleted. Then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink database command.
When you install the TAC there is a very useful readme file in the same directory.

When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up.

Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. This setting can be useful to clean out all items, both in a one-time clean-up operation and as part of a general scheduled maintenance.

There are two scenarios we need to consider:

Cleaning up an existing overgrown database
Maintaining a server to avoid an overgrown database, using a scheduled TAC

1. Cleaning up a database which has grown too big due to these attachments

This job is a "once" job. We do this once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and not bother about the smaller stuff; that can be left to a scheduled TAC cleanup (see 2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items. Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

<!-- Scenario : Remove large files -->
<DeletionCriteria>
  <TestRun />
  <Attachment>
    <SizeInMB GreaterThan="10" />
  </Attachment>
</DeletionCriteria>

Or, if you only want to remove dll's and pdb's above that size, add an Extensions section. Without that section, attachments of all extensions will be deleted:

<!-- Scenario : Remove large files of type dll's and pdb's -->
<DeletionCriteria>
  <TestRun />
  <Attachment>
    <SizeInMB GreaterThan="10" />
    <Extensions>
      <Include value="dll" />
      <Include value="pdb" />
    </Extensions>
  </Attachment>
</DeletionCriteria>

Before you start up your scheduled maintenance, you should clear out all older items.

2. Scheduled maintenance using the TAC

Run a schedule every night that removes old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted.

<!-- Scenario : Remove old items except if they have active or resolved bugs -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="180" />
  </TestRun>
  <Attachment />
  <LinkedBugs>
    <Exclude state="Active" />
    <Exclude state="Resolved" />
  </LinkedBugs>
</DeletionCriteria>

In my experience there are projects which are left with active or resolved work items, although no further work is done. It can be wise to have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step approach: use the schedule above, with no LinkedBugs section, as a sweeper task that takes away all data older than you could possibly care about, and then have another scheduled TAC task that more specifically takes out the attachments you are not likely to use.
This task could be much more specific and, based on your analysis, clean out what you know is troublesome data.

<!-- Scenario : Remove specific files early -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="30" />
  </TestRun>
  <Attachment>
    <SizeInMB GreaterThan="10" />
    <Extensions>
      <Include value="iTrace" />
      <Include value="dll" />
      <Include value="pdb" />
      <Include value="wmv" />
    </Extensions>
  </Attachment>
  <LinkedBugs>
    <Exclude state="Active" />
    <Exclude state="Resolved" />
  </LinkedBugs>
</DeletionCriteria>

The readme document for the TAC says that it recognizes "internal" extensions, but in fact it recognizes any extension.

To run the tool, use the following command:

tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

Shrinking the database

You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In that case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough.

For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly. So on a daily maintenance schedule you should NOT shrink the database.

To shrink the database you run a DBCC SHRINKDATABASE command and then follow it up with an index rebuild (note that DBCC INDEXDEFRAG only reorganizes the indexes, which – as said above – is not enough). I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to do this.
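If you prefer a script over a maintenance plan, a minimal T-SQL sketch of the same two steps could look like this. The database name is a placeholder for your own collection database, and sp_MSforeachtable is an undocumented (but long-standing) helper procedure; the Rebuild Index Task in a maintenance plan achieves the same result.

-- Minimal sketch: reclaim space after a large TAC cleanup, then rebuild all indexes.
-- "Tfs_DefaultCollection" is a placeholder for your collection database name.
USE [Tfs_DefaultCollection];
GO
-- Physically release the freed space back to the file system.
DBCC SHRINKDATABASE (N'Tfs_DefaultCollection');
GO
-- The shrink leaves the indexes heavily fragmented, so rebuild them
-- (a rebuild, not a reorganize, as noted above).
EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';
GO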

    Read the article

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
      Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments.  In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE.  In this segment, I will lay the groundwork for the modeling concepts.  First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing. Oracle BI EE Query Cycle The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources.  There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources. Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves.  Most of them create queries without knowing it, by picking a dashboard page and some filters.  Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations. The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business focused measures and dimensional attributes that business people can use in their analyses and dashboards.   After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts. The CEIM is a model that controls the processing of the BI Server.  It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis.  It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules.  The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.     Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL.  The API simply uses ODBC or JDBC to pass the query to the BI Server.  Presentation services writes the LSQL query in terms of the simplified objects presented to the users.  The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources.  For example, the LSQL on the left below was rewritten into the physical SQL for an Oracle 11g database on the right. 
Logical SQL:

SELECT "D0 Time"."T02 Per Name Month" saw_0,
       "D4 Product"."P01  Product" saw_1,
       "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
FROM "Sample Sales"
ORDER BY saw_0, saw_1

Physical SQL (Oracle 11g):

WITH SAWITH0 AS (
  select T986.Per_Name_Month as c1, T879.Prod_Dsc as c2,
         sum(T835.Units) as c3, T879.Prod_Key as c4
  from Product T879 /* A05 Product */ ,
       Time_Mth T986 /* A08 Time Mth */ ,
       FactsRev T835 /* A11 Revenue (Billed Time Join) */
  where ( T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid)
  group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
)
select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
from SAWITH0
order by c1, c2

Probably everybody reading this blog can write SQL or MDX.  However, the trick in designing the CEIM is that you are modeling a query-generation factory.  Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests.  This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

The Structure of the Common Enterprise Information Model (CEIM)

The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result.  The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below. Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request.  When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries.  That's the left to right flow in the diagram below.  When the results come back from the data source queries, the right to left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

Business Model

Think of the business model as the heart of the CEIM you are designing.  This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole.  It also provides the baseline business-friendly names and user-readable dictionary.  For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models. Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well.  This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next.  The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model. Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time.  This is important for federation, where a given logical object can map to several different physical objects in different databases.  It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products. Your requirements process with your user community will mostly affect the business model.  This is where you will define most of the things they specifically ask for, such as metric definitions.  For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer. Physical Model The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries.  Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time. Each physical data source will have its own physical model, or "database" object in the CEIM.  The shape of each physical model matches the shape of its physical source.  In other words, if the source is normalized relational, the physical model will mimic that normalized shape.  If it is a hypercube, the physical model will have a hypercube shape.  If it is a flat file, it will have a denormalized tabular shape. To aid in query optimization, the physical layer also tracks the specifics of the database brand and release.  This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer.  For most data sources, native APIs are used to further optimize performance and functionality. The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics.  This encapsulation is another enabler of packaged BI applications and federation.  It is also key to hiding the complex shapes and relationships in the physical sources from the end users.  Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database.  The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same. Mappings Within the Business Model and Mapping Layer, the mappings provide the binding from each logical column and join in the dimensional business model, to each of the objects that can provide its data in the physical layer.  
When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used.  These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources.  These mappings are usually the most sophisticated part of the CEIM. Presentation You might think of the presentation layer as a set of very simple relational-like views into the business model.  Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns.  For business users, presentation services interprets these as subject areas, folders and columns, respectively.  (Note that in 10g, subject areas were called presentation catalogs in the CEIM.  In this blog, I will stick to 11g terminology.)  Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio). The purpose of the presentation layer is to specialize the business model for different categories of users.  Based on a user's role, they will be restricted to specific subject areas, tables and columns for security.  The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users.  Customized names and descriptions can be used to override the business model names for a specific audience.  Variables in the object names can be used for localization. For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables.  The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer.  In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model. Other Model Objects There are some objects that apply to multiple layers.  These include security-related objects, such as application roles, users, data filters, and query limits (governors).  There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user session basis.  Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.   The Query Factory At this point, you should have a grasp on the query factory concept.  When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.  
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
    This is the third of a three part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database – a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and of some more specific aspects of the here discussed EF/MSSQL combination. Lately, I wondered how you would ‘mock’ the data layer of a legacy application, when this data layer is made up of an MS Entity Framework (EF) model in combination with a MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism – a not so uncommon scenario. The question sparked my interest, and I decided to dive into it somewhat deeper. What I've found out is, in short, that it's not very easy and straightforward to do it – but it can be done. The two strategies that are best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post, this post now will present the NDbUnit approach... NDbUnit is an Apache 2.0-licensed open-source project, and like so many other Nxxx tools and frameworks, it is basically a C#/.NET port of the corresponding Java version (DbUnit namely). In short, it helps you in flexibly managing the state of a database in that it lets you easily perform basic operations (like e.g. Insert, Delete, Refresh, DeleteAll)  against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework. Preparing the test data Compared to Typemock, using NDbUnit implies a totally different approach to meet our testing needs.  So the here described testing scenario requires an instance of an SQL Server database in operation, and it also means that the Entity Framework model that sits on top of this database is completely unaffected. First things first: For its interactions with the database, NDbUnit relies on a .NET Dataset xsd file. See Step 1 of their Quick Start Guide for a description of how to create one. 
With this prerequisite in place then, the test fixture's setup code could look something like this: [TestFixture, TestsOn(typeof(PersonRepository))] [Metadata("NDbUnit Quickstart URL",           "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")] [Description("Uses the NDbUnit library to provide test data to a local database.")] public class PersonRepositoryFixture {     #region Constants     private const string XmlSchema = @"..\..\TestData\School.xsd";     #endregion // Constants     #region Fields     private SchoolEntities _schoolContext;     private PersonRepository _personRepository;     private INDbUnitTest _database;     #endregion // Fields     #region Setup/TearDown     [FixtureSetUp]     public void FixtureSetUp()     {         var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;         _database = new SqlDbUnitTest(connectionString);         _database.ReadXmlSchema(XmlSchema);         var entityConnectionStringBuilder = new EntityConnectionStringBuilder         {             Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",             Provider = "System.Data.SqlClient",             ProviderConnectionString = connectionString         };         _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);         _personRepository = new PersonRepository(this._schoolContext);     }     [FixtureTearDown]     public void FixtureTearDown()     {         _database.PerformDbOperation(DbOperationFlag.DeleteAll);         _schoolContext.Dispose();     }     ...  As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: Because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assemblies' App.config, then initialize an NDbUnit helper object with this connection along with the provided xsd file, and also set up the SchoolEntities and the PersonRepository instances accordingly. The _database field (an instance of the INdUnitTest interface) will be our single access point to the underlying database: We use it to perform all the required operations against the data store. To have a flexible mechanism to easily insert data into the database, we can write a helper method like this: private void InsertTestData(params string[] dataFileNames) {     _database.PerformDbOperation(DbOperationFlag.DeleteAll);     if (dataFileNames == null)     {         return;     }     try     {         foreach (string fileName in dataFileNames)         {             if (!File.Exists(fileName))             {                 throw new FileNotFoundException(Path.GetFullPath(fileName));             }             _database.ReadXml(fileName);             _database.PerformDbOperation(DbOperationFlag.InsertIdentity);         }     }     catch     {         _database.PerformDbOperation(DbOperationFlag.DeleteAll);         throw;     } } This lets us easily insert test data from xml files, in any number and in a  controlled order (which is important because we eventually must fulfill referential constraints, or we must account for some other stuff that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. - Unfortunately, there isn't too much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course) - a not so uncommon problem with smaller Open Source Projects. 
Last not least, we need to provide the required test data in xml form. A snippet for data from the People table might look like this, for example: <?xml version="1.0" encoding="utf-8" ?> <School xmlns="http://tempuri.org/School.xsd">   <Person>     <PersonID>1</PersonID>     <LastName>Abercrombie</LastName>     <FirstName>Kim</FirstName>     <HireDate>1995-03-11T00:00:00</HireDate>   </Person>   <Person>     <PersonID>2</PersonID>     <LastName>Barzdukas</LastName>     <FirstName>Gytis</FirstName>     <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>   </Person>   <Person>     ... You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of Copy/Paste and Excel magic, this appears to be well manageable. Executing some basic tests Here are some of the possible tests that can be written with the above preparations in place: private const string People = @"..\..\TestData\School.People.xml"; ... [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")] public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames() {     InsertTestData(People);     List<string> names =         _personRepository.GetNameList(NameOrdering.List);     Assert.Count(34, names);     Assert.AreEqual("Abercrombie, Kim", names.First());     Assert.AreEqual("Zheng, Roger", names.Last()); } [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")] [DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")] public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames() {     InsertTestData(People);     List<string> names =         _personRepository.GetNameList(NameOrdering.Normal);     Assert.Count(34, names);     Assert.AreEqual("Alexandra Walker", names.First());     Assert.AreEqual("Yan Li", names.Last()); } [Test, TestsOn("PersonRepository.AddPerson")] public void AddPerson_CalledOnce_IncreasesCountByOne() {     InsertTestData(People);     int count = _personRepository.Count;     _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });     Assert.AreEqual(count + 1, _personRepository.Count); } [Test, TestsOn("PersonRepository.RemovePerson")] public void RemovePerson_CalledOnce_DecreasesCountByOne() {     InsertTestData(People);     int count = _personRepository.Count;     _personRepository.RemovePerson(new Person { PersonID = 33 });     Assert.AreEqual(count - 1, _personRepository.Count); } Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparational work (and also it was harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases: Ok, and what if things are becoming somewhat more complex? Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimick a scenario which represents a more complex or exceptional case. 
The following test, for example, deals with the case that there is some sort of invalid input from the caller: [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")] [Row(null, typeof(ArgumentNullException))] [Row("", typeof(ArgumentException))] [Row("NotExistingCourse", typeof(ArgumentException))] public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType) {     var exception = Assert.Throws<RepositoryException>(() =>                                 _personRepository.GetCourseMembers(courseTitle));     Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException); } Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't always necessarily mean that there is less complexity, but only that the complexity happens in a different part of your test resources (in the xml files namely, where you sometimes have to spend a lot of effort for carefully preparing the required test data). Another example, which relies on an underlying 1-n relationship, might be this: [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")] public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents() {     InsertTestData(People, Course, Department, StudentGrade);     List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");     Assert.Count(4, persons);     Assert.ForAll(         persons,         @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),         "Person has none of the expected IDs."); } If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the used xml files with the test data. And also note that you might have to provide additional data which are not even directly relevant to your test, but are required only to fulfill some integrity needs of the underlying database. Conclusion The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously deals with performance: Of course, NDbUnit is much slower than Typemock. Technically,  it doesn't even make sense to compare the two tools. But practically, it may well play a role and could or could not be an issue, depending on how much tests you have of this kind, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel. 
My personal experience is – as already said in the first part – that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD-style, you're oftentimes executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or User-Acceptance tests. But in any case, opening Entity Framework based applications for testing requires a fair amount of resources, planning, and preparational work – it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see too much of a future for the framework in the long run, even though it's quite popular at the moment... The sample solution A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here (Bitbucket is a hosting site for Mercurial repositories. The repositories may also be accessed with the Git and Subversion SCMs - consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive – via the 'get source' button on the very right.). The solution contains some more tests against the PersonRepository class, which are not shown here. Also, it contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30day-trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects App.config files accordingly.

    Read the article

  • CodePlex Daily Summary for Thursday, January 06, 2011

    CodePlex Daily Summary for Thursday, January 06, 2011Popular ReleasesStyleCop for ReSharper: StyleCop for ReSharper 5.1.14980.000: A considerable amount of work has gone into this release: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes: - StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the correct path ...VivoSocial: VivoSocial 7.4.1: New release with bug fixes and updates for performance.SSH.NET Library: 2011.1.6: Fixes CommandTimeout default value is fixed to infinite. Port Forwarding feature improvements Memory leaks fixes New Features Add ErrorOccurred event to handle errors that occurred on different thread New and improve SFTP features SftpFile now has more attributes and some operations Most standard operations now available Allow specify encoding for command execution KeyboardInteractiveConnectionInfo class added for "keyboard-interactive" authentication. Add ability to specify bo...UltimateJB: Ultimate JB 2.03 PL3 KAKAROTO: Voici une version attendu avec impatience pour beaucoup : - La version PL3 KAKAROTO intégre ses dernières modification et intégre maintenant le firmware 2.43 !!! Conclusion : - ultimateJB DEFAULT => Pas de spoof mais disponible pour les PS3 suivantes : 3.41_kiosk 3.41 3.40 3.30 3.21 3.15 3.10 3.01 2.76 2.70 2.60 2.53 2.43.NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03: Added lot's of new extensions and new projects for MVC and Entity Framework. object.FindTypeByRecursion Int32.InRange String.RemoveAllSpecialCharacters String.IsEmptyOrWhiteSpace String.IsNotEmptyOrWhiteSpace String.IfEmptyOrWhiteSpace String.ToUpperFirstLetter String.GetBytes String.ToTitleCase String.ToPlural DateTime.GetDaysInYear DateTime.GetPeriodOfDay IEnumberable.RemoveAll IEnumberable.Distinct ICollection.RemoveAll IList.Join IList.Match IList.Cast Array.IsNullOrEmpty Array.W...VidCoder: 0.8.0: Added x64 version. Made the audio output preview more detailed and accurate. If the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed. Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and allow display width to be set automatically (Thanks, Statick). Allowing higher bitrates for 6-ch....NET Voice Recorder: Auto-Tune Release: This is the source code and binaries to accompany the article on the Coding 4 Fun website. It is the Auto Tuner release of the .NET Voice Recorder application.BloodSim: BloodSim - 1.3.2.0: - Simulation Log is now automatically disabled and hidden when running 10 or more iterations - Hit and Expertise are now entered by Rating, and include option for a Racial Expertise bonus - Added option for boss to use a periodic magic ability (Dragon Breath) - Added option for boss to periodically Enrage, gaining a Damage/Attack Speed buffASP.NET MVC CMS ( Using CommonLibrary.NET ): CommonLibrary.NET CMS 0.9.5 Alpha: CommonLibrary CMSA simple yet powerful CMS system in ASP.NET MVC 2 using C# 4.0. 
ActiveRecord based components for Blogs, Widgets, Pages, Parts, Events, Feedback, BlogRolls, Links Includes several widgets ( tag cloud, archives, recent, user cloud, links twitter, blog roll and more ) Built using the http://commonlibrarynet.codeplex.com framework. ( Uses TDD, DDD, Models/Entities, Code Generation ) Can run w/ In-Memory Repositories or Sql Server Database See Documentation tab for Ins...EnhSim: EnhSim 2.2.9 BETA: 2.2.9 BETAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in the Gobl...xUnit.net - Unit Testing for .NET: xUnit.net 1.7 Beta: xUnit.net release 1.7 betaBuild #1533 Important notes for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. This release adds the following new features: Added support for ASP.NET MVC 3 Added Assert.Equal(double expected, double actual, int precision)...Json.NET: Json.NET 4.0 Release 1: New feature - Added Windows Phone 7 project New feature - Added dynamic support to LINQ to JSON New feature - Added dynamic support to serializer New feature - Added INotifyCollectionChanged to JContainer in .NET 4 build New feature - Added ReadAsDateTimeOffset to JsonReader New feature - Added ReadAsDecimal to JsonReader New feature - Added covariance to IJEnumerable type parameter New feature - Added XmlSerializer style Specified property support New feature - Added ...DbDocument: DbDoc Initial Version: DbDoc Initial versionASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.2: Atomic CMS 2.1.2 release notes Atomic CMS installation guide N2 CMS: 2.1: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major Changes Support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen) All-round improvements and bugfixes File manager improvements (multiple file upload, resize images to fit) New image gallery Infinite scroll paging on news Content templates First time with N2? Try the demo site Download one of the template packs (above) and open the proj...Wii Backup Fusion: Wii Backup Fusion 1.0: - Norwegian translation - French translation - German translation - WBFS dump for analysis - Scalable full HQ cover - Support for log file - Load game images improved - Support for image splitting - Diff for images after transfer - Support for scrubbing modes - Search functionality for log - Recurse depth for Files/Load - Show progress while downloading game cover - Supports more databases for cover download - Game cover loading routines improvedAutoLoL: AutoLoL v1.5.1: Fix: Fixed a bug where pressing Save As would not select the Mastery Directory by default Unexpected errors are now always reported to the user before closing AutoLoL down.* Extracted champion data to Data directory** Added disclaimer to notify users this application has nothing to do with Riot Games Inc. 
Updated Codeplex image * An error report will be shown to the user which can help the developers to find out what caused the error, this should improve support ** We are working on ...TortoiseHg: TortoiseHg 1.1.8: TortoiseHg 1.1.8 is a minor bug fix release, with minor improvementsBlogEngine.NET: BlogEngine.NET 2.0: Get DotNetBlogEngine for 3 Months Free! Click Here for More Info 3 Months FREE – BlogEngine.NET Hosting – Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. To get started, be sure to check out our installatio...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.6 Released: Hi, Today we are releasing final version of Visifire, v3.6.6 with the following new feature: * TextDecorations property is implemented in Title for Chart. * TitleTextDecorations property is implemented in Axis. * MinPointHeight property is now applicable for Column and Bar Charts. Also this release includes few bug fixes: * ToolTipText property of DataSeries was not getting applied from Style. * Chart threw exception if IndicatorEnabled property was set to true and Too...New Projects.NET Framework Extensions Packages: Lightweight NuGet packages with reusable source code extending core .NET functionality, typically in self-contained source files added to your projects as internal classes that can be easily kept up-to-date with NuGet..NET Random Mock Extensions: .NET Random Mock Extensions allow to generate by 1 line of code object implementing any interface or class and fill its properties with random values. This can be usefull for generating test data objects for View or unit testing while you have no real domain object model.ancc: anccASP.NET Social Controls: ASP.NET Social Controls is a small collection of server controls designed to make integrating social sharing utilities such as ShareThis, AddThis and AddToAny easier, more manageable, and X/HTML-compliant, with configuration files and per-instance settings.Autofac for WindowsPhone7: This project hosts the releases for Autofac built for WindowsPhone7AutoSensitivity: AutoSensitivity allows you to define different mouse sensitivities (speeds) for your tocuhpad and mouse and automatically switch between them (based on mouse connect / disconnect).BaseCode: basecodeCaliburn Micro Silverlight Navigation: Caliburn Micro Silverlight Navigation adds navigation to Caliburn Micro UI Framework by applying the ViewModel-First principle. Debian 5 Agent for System Center Operations Manager 2007 R2: Debian 5 System Center Operations Manager 2007 R2 Agent. Debian 5 Management Pack For System Center Operations Manager 2007 R2: Debian 5 Management Pack for SCOM 2007 R2. It will be useless without the Agent (in another project).Eventbrite Helper for WebMatrix: The Eventbrite Helper for WebMatrix makes it simple to promote your Eventbrite events in your WebMatrix site. With a few lines of code you will be able to display your events on your web site with integration with Windows Live Calendar and Google Calendar.Eye Check: EyeCheck is an eye health testing project. It contains a set of tests to examine eye health. 
It's developed in C# using the Silverlight technology.Hooly Search: This ASP.NET project lets you browse through and search text within holy booksIssueVision.ST: A Silverlight LOB sample using Self-tracking Entities, WCF Services, WIF, MVVM Light toolkit, MEF, and T4 Templates.Lawyer Officer: Projeto desenvolvido como meu trabalho de conclusão de curso para formação em bacharelado em sistemas da informação da FATEF-São VicenteLINQtoROOT: Translates LINQ queries from the .NET world in to CERN's ROOT language (C++) and then runs them (locally or on a PROOF server).OA: ??????????Open Manuscript System: Open Manuscript Systems (OMS) is a research journal management and publishing system with manuscript tracking that has been developed in this project to expand and improve access to research.ProjectCNPM_Vinhlt_Teacher: Ðây là b?n CNPM demo c?a nhóm 6,K52a3 HUS VN. b?n demo này cung là project dâu ti?n tri?n khai phát tri?n th? nghi?m trên mô hình m?ng - Nhi?u member cùng phát tri?n cùng lúc QuanLyNhanKhau: WPF test.RazorPad: RazorPad is a quick and simple stand-alone editing environment that allows anyone (even non-developers) to author Razor templates. It is developed in WPF using C# and relies on the System.WebPages.Razor libraries (included in the project download). Rovio Tour Guide: More details to follow soon....long story short building a robotic tour guide using the Rovio roving webcam platform for proof of concept.ScrumPilot: ScrumPilot is a viewer of events coming from Team Foundation Server The main goal of this project is to help team to follow in real time the Checkins and WorkItems changing. Team can do comments to each event and they can preview some TFS artifacts.S-DMS: S-DMS?????????(Document Manage System)Sharepoint Documentation Generator: New MOSS feature to automatically generate documentation/tables for fields, content types, lists, users, etc...ShengjieGao's projects: ?????Stylish DOS Box: Since the introduction of Windows 3.11 I am trying to avoid the DOS box and use any applet provided with GUI in Windows system. Yet, I realize that there is no week passed by without me opening the DOS box! This project will give the DOS Box a new look.Table2DTO: Auto generate code to build objects (DTOs, Models, etc) from a data table.Techweb: Alon's and Simon's 236607 homework assignments.TLC5940 Driver for Netduino: An Netduino Library for the TI TLC5940 16-Channel PWM Chip. Tratando Exceptions da Send Port na Orchestration: Quando a Send Port é do tipo Request-Response manipular o exception é intuitivo, já que basta colocar um escopo e adicionar um exception do tipo System.Exception. Mas quando a porta é one-way a coisa complica um pouco.UAC Runner: UAC Runner is a small application which allows the running of applications as an administrator from the command line using Windows UAC.Ubuntu 10 Agent for System Center Operations Manager 2007 R2: Ubuntu 10 System Center Operations Manager 2007 R2 Agent.Ubuntu 10 Management Pack For System Center Operations Manager 2007 R2: Ubuntu 10 Management Pack for SCOM 2007 R2. It will be useless without the Agent (in another project). It is based on Red Hat 5 Management Pack. See the Download section to download the MPs and the source files (XML) Whe Online Storage: Whe Online Storage, is an 3. party online storage system and tools for free source. C#, .NET 4.0, SilverlightWindows Phone MVP: An MVP implementation for Windows Phone.

    Read the article

  • Project Management Helps AmeriCares Deliver International Aid

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Alison Weiss Handle with Care Sound project management helps AmeriCares bring international aid to those in need. The stakes are always high for AmeriCares. On a mission to restore health and save lives during times of disaster, the nonprofit international relief and humanitarian aid organization delivers donated medicines, medical supplies, and humanitarian aid to people in the U.S. and around the globe. Founded in 1982 with the express mission of responding as quickly and efficiently as possible to help people in need, the Stamford, Connecticut-based AmeriCares has delivered more than US$10.5 billion in aid to 147 countries over the past three decades. “It’s critically important to us that we steward all the donations and that the medical supplies and medicines get to people as quickly as possible with no loss,” says Kate Sears, senior vice president for finance and technology at AmeriCares. “Whether we’re shipping IV solutions to victims of cholera in Haiti or antibiotics to Somali famine victims, we need to get the medicines there sooner because it means more people will be helped and lives improved or even saved.” Ten years ago, the tracking systems used by AmeriCares associates were paper-based. In recent years, staff started using spreadsheets, but the tracking processes were not standardized between teams. “Every team was tracking completely different information,” says Megan McDermott, senior associate, Sub-Saharan Africa partnerships, at AmeriCares. “It was just a few key things. For example, we tracked the date a shipment was supposed to arrive and the date we got reports from our partner that a hospital received aid on their end.” While the data was accurate, much detail was being lost in the process. AmeriCares management knew it could do a better job of tracking this enterprise data and in 2011 took a significant step by implementing Oracle’s Primavera P6 Professional Project Management. “It’s a comprehensive solution that has helped us improve the monitoring and controlling processes. It has allowed us to do our distribution better,” says Sears. In addition, the implementation effort has been a change agent, helping AmeriCares leadership rethink project management across the entire organization. Initially, much of the focus was on standardizing processes, but staff members also learned the importance of thinking proactively to prevent possible problems and evaluating results to determine if goals and objectives are truly being met. Such data about process efficiency and overall results is critical not only to AmeriCares staff but also to the donors supporting the organization’s life-saving missions. Efficiency Saves Lives One of AmeriCares’ core operations is to gather product donations from the private sector, establish where the most-urgent needs are, and solicit monetary support to send the aid via ocean cargo or airlift to welfare- and health-oriented nongovernmental organizations, hospitals, health networks, and government ministries based in areas in need. In 2011 alone, AmeriCares sent more than 3,500 shipments to 95 countries in response to both ongoing humanitarian needs and more than two dozen emergencies, including deadly tornadoes and storms in the U.S. and the devastating tsunami in Japan. When it comes to nonprofits in general, donors want to know that the charitable organizations they support are using funds wisely. 
Typically, nonprofits are evaluated by donors in terms of efficiency, an area where AmeriCares has an excellent reputation: 98 percent of expenses go directly to supporting programs and less than 2 percent represent administrative and fundraising costs. Donors, however, should look at more than simple efficiency, says Peter York, senior partner and chief research and learning officer at TCC Group, a nonprofit consultancy headquartered in New York, New York. They should also look at whether organizations have the systems in place to sustain their missions and continue to thrive. An expert on nonprofit organizational management, York has spent years studying sustainable charitable organizations. He defines them as nonprofits that are able to achieve the ongoing financial support to stay relevant and continue doing core mission work. In his analysis of well over 2,500 larger nonprofits, York has found that many are not sustaining, and are actually scaling back in size. “One of the biggest challenges of nonprofit sustainability is the general public’s perception that every dollar donated has to go only to the delivery of service,” says York. “What our data shows is that there are some fundamental capacities that have to be there in order for organizations to sustain and grow.” York’s research highlights the importance of data-driven leadership at successful nonprofits. “You’ve got to have the tools, the systems, and the technologies to get objective information on what you do, the people you serve, and the results you’re achieving,” says York. “If leaders don’t have the knowledge and the data, they can’t make the strategic decisions about programs to take organizations to the next level.” Historically, AmeriCares associates have used time-tested and cost-effective strategies to ship and then track supplies from donation to delivery to their destinations in designated time frames. When disaster strikes, AmeriCares ships by air and generally pulls out all the stops to deliver the most urgently needed aid within the first few days and weeks. Then, as situations stabilize, AmeriCares turns to delivering sea containers for the postemergency and ongoing aid so often needed over the long term. According to McDermott, getting a shipment out the door is fairly complicated, requiring as many as five different AmeriCares teams collaborating together. The entire process can take months—from when products are received in the warehouse and deciding which recipients to allocate supplies to, to getting customs and governmental approvals in place, actually shipping products, and finally ensuring that the products are received in-country. Delivering that aid is no small affair. “Our volume exceeds half a billion dollars a year worth of donated medicines and medical supplies, so it’s a sizable logistical operation to bring these products in and get them out to the right place quickly to have the most impact,” says Sears. “We really pride ourselves on our controls and efficiencies.” Adding to that complexity is the fact that the longer it takes to deliver aid, the more dire the human need can be. Any time AmeriCares associates can shave off the complicated aid delivery process can translate into lives saved. “It’s really being able to track information consistently that will help us to see where are the bottlenecks and where can we work on improving our processes,” says McDermott. 
Setting a Standard Productivity and information management improvements were key objectives for AmeriCares when staff began the process of implementing Oracle’s Primavera solution. But before configuring the software, the staff needed to take the time to analyze the systems already in place. According to Greg Loop, manager of database systems at AmeriCares, the organization received guidance from several consultants, including Rich D’Addario, consulting project manager in the Primavera Global Business Unit at Oracle, who was instrumental in shepherding the critical requirements-gathering phase. D’Addario encouraged staff to begin documenting shipping processes by considering the order in which activities occur and which ones are dependent on others to get accomplished. This exercise helped everyone realize that to be more efficient, they needed to keep track of shipments in a more standard way. “The staff didn’t recognize formal project management methodology,” says D’Addario. “But they did understand what the most important things are and that if they go wrong, an entire project can go off course.” Before, if a boatload of supplies was being sent to Haiti and there was a problem somewhere, a lot of time was taken up finding out where the problem was—because staff was not tracking things in a standard way. As a result, even more time was needed to find possible solutions to the problem and alert recipients that the aid might be delayed. “For everyone to put on the project manager hat and standardize the way every single thing is done means that now the whole organization is on the same page as to what needs to occur from the time a hurricane hits Haiti and when a boat pulls in to unload supplies,” says D’Addario. With so much care taken to put a process foundation firmly in place, configuring the Primavera solution was actually quite simple. Specific templates were set up for different types of shipments, and dashboards were implemented to provide executives with clear overviews of every project in the system. AmeriCares’ Loop reports that system planning, refining, and testing, followed by writing up documentation and training, took approximately four months. The system went live in spring 2011 at AmeriCares’ Connecticut headquarters. While the nonprofit has an international presence, with warehouses in Europe and offices in Haiti, India, Japan, and Sri Lanka, most donated medicines come from U.S. entities and are shipped from the U.S. out to the rest of the world. In addition, all shipments are tracked from the U.S. office. AmeriCares doesn’t expect the Primavera system to take months off the shipping time, especially for sea containers. However, any time saved is still important because it will allow aid to be delivered to people more quickly at a lower overall cost. “If we can trim a day or two here or there, that can translate into lives that we’re saving, especially in emergency situations,” says Sears. A Cultural Change Beyond the measurable benefits that come with IT-driven process improvement, AmeriCares management is seeing a change in culture as a result of the Primavera project. One change has been treating every shipment of aid as a project, and everyone involved with facilitating shipments as a project manager. “This is a revolutionary concept for us,” says McDermott. 
“Before, we were used to thinking we were doing logistics—getting a container from point A to point B without looking at it as one project and really understanding what it meant to manage it.”

AmeriCares staff is also happy to report that collaboration within the organization is much more efficient. When someone creates a shipment in the Primavera system, the same shared template is used, which means anyone can log in to the system to see the status of a shipment. Knowledgeable staff can access a shipment project to help troubleshoot a problem. Management can easily check the status of projects across the organization. “Dashboards are really useful,” says McDermott. “Instead of going into the details of each project, you can just see the high-level real-time information at a glance.”

The new system is helping team members focus on proactively managing shipments rather than simply reacting when problems occur. For example, when a container is shipped, documents must be included for customs clearance. Now, the shipping template has built-in reminders to prompt team members to ask for copies of these documents from freight forwarders and to follow up with partners to discover if a shipment is on time. In the past, staff may not have worked on securing these documents until they’d been notified a shipment had arrived in-country.

Another benefit of capturing and adopting best practices within the Primavera system is that staff training is easier. “Capturing the processes in documented steps and milestones allows us to teach new staff members how to do their jobs faster,” says Sears. “It provides them with the knowledge of their predecessors so they don’t have to keep reinventing the wheel.”

With the Primavera system already generating positive results, management is eager to take advantage of advanced capabilities. Loop is working on integrating the company’s proprietary inventory management system with the Primavera system so that when logistics or warehousing operators input data, the information will automatically go into the Primavera system. In the past, this information had to be manually keyed into spreadsheets, often leading to errors.

Mining Historical Data

Another feature on the horizon for AmeriCares is utilizing Primavera P6 Professional Project Management reporting capabilities. As the system begins to include more historical data, management soon will be able to draw on this information to conduct analysis that has not been possible before and create customized reports. For example, at the beginning of the shipment process, staff will be able to use historical data to more accurately estimate how long the approval process should take for a particular country. This could help ensure that food and medicine with limited shelf lives do not get stuck in customs or used beyond their expiration dates.

The historical data in the Primavera system will also help AmeriCares with better planning year to year. The nonprofit’s staff has always put together a plan at the beginning of the year, but this has been very challenging simply because it is impossible to predict disasters. Now, management will be able to look at historical data and see trends and statistics as they set current objectives and prepare for future need. In addition, this historical data will provide AmeriCares management with the ability to review year-end data and compare actual project results with goals set at the beginning of the year, to see if desired outcomes were achieved and if there are areas that need improvement.
It’s this type of information that is so valuable to donors. And, according to York, project management software can play a critical role in generating the data to help nonprofits sustain and grow. “It is important to invest in systems to help replicate, expand, and deliver services,” says York. “Project management software can help because it encourages nonprofits to examine program or service changes and how to manage moving forward.”

Sears believes that AmeriCares donors will support the return on investment the organization will achieve with the Primavera solution. “It won’t be financial returns, but rather how many more people we can help for a given dollar or how much more quickly we can respond to a need,” says Sears. “I think donors are receptive to such arguments.”

And for AmeriCares, it is all about the future and increasing results. The project management environment currently may be quite simple, but IT staff plans to expand the complexity and functionality as the organization grows in its knowledge of project management and the goals it wants to achieve. “As we use the system over time, we’ll continue to refine our best practices and accumulate more data,” says Sears. “It will advance our ability to make better data-driven decisions.”

  • jqGrid (Delete row) - How to send additional POST data???

    - by ronanray
    Hi experts, I'm having a problem with the jqGrid delete mechanism: it only sends the "oper" and "id" parameters as POST data (id is the primary key of the table). The problem is that I need to delete a row based on the id and another column value, say user_id. How can I add this user_id to the POST data? I can summarize the issue as follows: how do I get the cell value (user_id) of the selected row, and how do I add that user_id to the POST data so it can be retrieved in the code-behind where the actual delete takes place? Sample code:

        jQuery("#tags").jqGrid({
            url: "subgrid.process.php",
            editurl: "subgrid.process.php",
            datatype: "json",
            mtype: "POST",
            colNames: ['id', 'user_id', 'status_type_id'],
            colModel: [
                {name: 'id', index: 'id', width: 100, editable: true},
                {name: 'user_id', index: 'user_id', width: 200, editable: true},
                {name: 'status_type_id', index: 'status_type_id', width: 200}
            ],
            pager: '#pagernav2',
            rowNum: 10,
            rowList: [10, 20, 30, 40, 50, 100],
            sortname: 'id',
            sortorder: "asc",
            caption: "Test",
            height: 200
        });

        jQuery("#tags").jqGrid('navGrid', '#pagernav2',
            {add: true, edit: false, del: true, search: false},
            {},                                                             // edit options
            {mtype: "POST", closeAfterAdd: true, reloadAfterSubmit: true},  // add options
            {mtype: "POST", reloadAfterSubmit: true},                       // del options
            {}                                                              // search options
        );
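    One possible approach (a sketch, not part of the original question): jqGrid's delete options accept an onclickSubmit callback, and the object it returns is merged into the posted data. The extra user_id can therefore be read from the row with getCell and returned from that callback; the grid, pager, and column names below are simply reused from the question.

        // Sketch: post user_id along with oper and id when a row is deleted.
        jQuery("#tags").jqGrid('navGrid', '#pagernav2',
            {add: true, edit: false, del: true, search: false},
            {},                                                             // edit options
            {mtype: "POST", closeAfterAdd: true, reloadAfterSubmit: true},  // add options
            {                                                               // del options
                mtype: "POST",
                reloadAfterSubmit: true,
                onclickSubmit: function (options, rowid) {
                    // rowid is the id of the row being deleted; read its user_id
                    // cell and return it so jqGrid appends it to the POST data.
                    var userId = jQuery("#tags").jqGrid('getCell', rowid, 'user_id');
                    return { user_id: userId };
                }
            },
            {}                                                              // search options
        );

    The server-side script then receives oper, id, and user_id in the same POST request.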

  • Implementing the double-click event on Silverlight 4 Datagrid

    - by Mohammed Mudassir Azeemi
    Does any good soul have an example of implementing the "Command Pattern" introduced by Prism for the double-click event of the Silverlight 4.0 DataGrid? I tried the following:

        <data:DataGrid x:Name="dgUserRoles"
                       AutoGenerateColumns="False"
                       Margin="0"
                       Grid.Row="0"
                       ItemsSource="{Binding Path=SelectedUser.UserRoles}"
                       IsReadOnly="False">
            <data:DataGrid.Columns>
                <data:DataGridTemplateColumn Header=" ">
                    <data:DataGridTemplateColumn.CellTemplate>
                        <DataTemplate>
                            <Button Width="20" Height="20"
                                    Click="Button_Click"
                                    Command="{Binding EditRoleClickedCommand}"
                                    CommandParameter="{Binding SelectedRole}">
                            </Button>
                        </DataTemplate>
                    </data:DataGridTemplateColumn.CellTemplate>
                </data:DataGridTemplateColumn>
                <data:DataGridTextColumn Header="Role Name" Binding="{Binding RoleName}" />
                <data:DataGridTextColumn Header="Role Code" Binding="{Binding UserroleCode}" IsReadOnly="True" />
                <data:DataGridCheckBoxColumn Header="UDFM Managed" Binding="{Binding RoleIsManaged}" IsReadOnly="True" />
                <data:DataGridCheckBoxColumn Header="UDFM Role Assigned" Binding="{Binding UserroleIsUdfmRoleAssignment}" IsReadOnly="True" />
                <data:DataGridTextColumn Header="Source User" Binding="{Binding SourceUser}" IsReadOnly="True" />
            </data:DataGrid.Columns>
        </data:DataGrid>

    As you can see, I tried to hook up the command there, but it never fires the command handler in my view model. I'm looking for a good alternative.

  • jqGrid local data manipulation; problem with row ids when deleting and adding new rows

    - by Sam
    I'm using jqGrid as a client-side input grid, allowing the user to enter multiple records before POSTing all the data back at once. The problem is that if the user has added a few records (say three), the ids for those records will be 1, 2, 3. If the user deletes record 2, you're left with ids 1 and 3. When the user then adds a new record, jqGrid assigns it the id 3 again, since it seems to just count the total records and increment by one for the new row. This causes problems when selecting rows, because the row ids are now 1, 3, and 3. Does anyone know how to access the row ids of the records? I could probably use the afterSubmit event and reassign the row ids starting from 1 (so after I delete row id 2, the remaining rows get ids 1 and 2). Any other suggestions to solve this problem? Thanks.

    Edit: I've solved this with the following code for the delete navGrid button:

        }).navGrid('#pager',
            {add: true, del: true, refresh: false, search: false},
            { ... },   // edit parameters
            { ... },   // add parameters
            {          // delete parameters
                reloadAfterSubmit: false,
                clearAfterAdd: false,
                afterComplete: function () {
                    // Clear and re-add the row data so the row ids are sequential again.
                    var savedData = $("#inputgrid").jqGrid('getRowData');
                    $("#inputgrid").jqGrid('clearGridData');
                    $("#inputgrid").jqGrid('addRowData', 'rn', savedData);
                }
            }
        );

    Basically I'm just saving the grid data and then re-adding it so that the row ids are sequential again. For some reason it causes the row numbers down the left side to start from 2 instead of 1.

    Edit: this was solved by using the latest jqGrid code on GitHub (27 April 2010).
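    Another way to avoid duplicate ids altogether (a sketch, not part of the original post) is to assign each new row an explicit, ever-increasing id when it is added, instead of letting jqGrid derive the id from the row count; deleted ids are then simply never reused. The grid id #inputgrid comes from the question, while the counter, helper function, and field names are hypothetical.

        // Sketch: give every added row a unique id that survives deletions.
        var nextRowId = 1;  // hypothetical module-level counter

        function addRecord(record) {
            // Pass an explicit row id instead of letting jqGrid pick one.
            $("#inputgrid").jqGrid('addRowData', 'row_' + nextRowId, record);
            nextRowId += 1;
        }

        // Usage: row ids stay unique no matter how many rows are deleted in between.
        addRecord({ name: 'first record' });   // becomes row_1
        addRecord({ name: 'second record' });  // becomes row_2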

  • how to access (read/write) the local file system from webkit/javascript?

    - by ganapati hegde
    Hi, I am using WebKitGTK to render my HTML pages. Now, say I am browsing a page and select some text while reading; I want to save/write the selected text to a local file, say /home/localfile.txt. Is there any way to access (read/write) the local file system using WebKit? In Firefox, I can do it like this:

        try {
            netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
        } catch (e) {
            alert("Permission to save file was denied.");
        }

        var file = Components.classes["@mozilla.org/file/local;1"]
            .createInstance(Components.interfaces.nsILocalFile);
        file.initWithPath("/home/localfile.txt");
        if (file.exists() == false) {
            alert("Creating file... ");
            file.create(Components.interfaces.nsIFile.NORMAL_FILE_TYPE, 420);
        }

        var outputStream = Components.classes["@mozilla.org/network/file-output-stream;1"]
            .createInstance(Components.interfaces.nsIFileOutputStream);
        outputStream.init(file, 0x04 | 0x08 | 0x20, 420, 0);

        var output = "my data to be written to the local file";
        var result = outputStream.write(output, output.length);
        outputStream.close();

    The above code lives in a JavaScript file linked to the HTML page. When some text is selected, the JS function is called and the write takes place. How can I achieve the same result using WebKit? Thanks.
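    Page JavaScript running in WebKit has no equivalent of the privileged XPCOM call above, so it cannot write directly to an arbitrary path such as /home/localfile.txt. A common workaround (a sketch using only standard DOM APIs, and relying on the download attribute supported in newer WebKit-based browsers) is to grab the selection and hand it back to the user as a downloadable file:

        // Sketch: save the current text selection by offering it as a download.
        function saveSelection(filename) {
            var text = window.getSelection().toString();
            if (!text) {
                alert("Nothing selected.");
                return;
            }
            var blob = new Blob([text], { type: "text/plain" });
            var url = URL.createObjectURL(blob);

            var link = document.createElement("a");
            link.href = url;
            link.download = filename;  // the browser writes to its download folder
            document.body.appendChild(link);
            link.click();
            document.body.removeChild(link);
            URL.revokeObjectURL(url);
        }

        // Usage: call from a button or context-menu handler.
        // saveSelection("localfile.txt");

    In an embedded WebKitGTK application, the more direct route is to have the host application expose a native callback to the page and perform the file I/O on the native side.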

  • How do I get SSIS Data Flow to put '0.00' in a flat file?

    - by theog
    I have an SSIS package with a Data Flow that takes an ADO.NET data source (just a small table), executes a select * query, and outputs the query results to a flat file (I've also tried just pulling the whole table and not using a SQL select). The problem is that the data source pulls a column that is a Money datatype, and if the value is not zero, it comes into the text flat file just fine (like '123.45'), but when the value is zero, it shows up in the destination flat file as '.00'. I need to know how to get the leading zero back into the flat file.

    I've tried various datatypes for the output (in the Flat File Connection Manager), including currency and string, but this seems to have no effect. I've tried a case statement in my select, like this:

        CASE WHEN columnValue = 0 THEN '0.00' ELSE columnValue END

    (still results in '.00'). I've tried variations on that, like this:

        CASE WHEN columnValue = 0 THEN convert(decimal(12,2), '0.00') ELSE convert(decimal(12,2), columnValue) END

    (still results in '.00') and:

        CASE WHEN columnValue = 0 THEN convert(money, '0.00') ELSE convert(money, columnValue) END

    (results in '.0000000000000000000').

    This silly little issue is killin' me. Can anybody tell me how to get a zero Money datatype database value into a flat file as '0.00'?

  • Unit Testing.... a data provider ?

    - by TomTom
    Given problem: I like unit tests. I develop connectivity software for external systems, and it pretty much always uses a C++ library. The output of these systems is nondeterministic: data is received while the connection is running, and making sure it is all correctly interpreted is hard.

    How can I test this properly? I can run a unit test that does a connect, but sadly it will then process a live data stream. I can run the test for 30 or 60 seconds before disconnecting, but getting code coverage is impossible; I simply don't come close to exercising all code paths even once per day (error-handling paths are rarely run). I also cannot really assert every result: depending on the time of day we are talking about 20,000 data callbacks per second, none of which are well-defined enough to validate individually for consistency.

    Mocking? Well, that would leave me testing an empty shell of myself, because the code handling the events is basically the code under test, and in many cases we are dealing with complex C-level structures; it is hard to find mocking frameworks that bridge from C# to C++.

    Anyone have any ideas? I am close to giving up on unit tests for this part of the application.

  • jstree will not fire onchange event

    - by vasion
    I have been really stuck on this. This is the code:

    JS:

        var treeoptions = {
            "data": {
                "type": "json",
                "opts": { "url": "\/surveytags\/treejson" }
            }
        };
        $('#treecontainer').tree(treeoptions);

        $("#treecontainer").tree({
            callback: {
                ondblclk: function (node, tree) {
                    alert(node.id);
                },
                onmove: function (node, ref, type) {
                    data = new Object();
                    data.node = new Object();
                    data.node.id = node.id;
                    data.ref = new Object();
                    data.ref.id = ref.id;
                    data.type = type;
                    moveitem(data);
                },
                onchange: function () {
                    alert('focused');
                },
                oncreate: function (node) {
                    alert('create');
                    alert(node.data);
                }
            }
        });

    This is the JSON:

        {"attributes":{"id":"1"},"data":{"title":"root"},"children":[{"attributes":{"id":"2"},"data":{"title":"blah"},"children":[{"attributes":{"id":"3"},"data":{"title":"tworows down"}},{"attributes":{"id":"4"},"data":{"title":"tooope"}}]}]}

    It loads, and other events fire, but onchange will not.
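    One thing worth checking (a sketch, not part of the original question): the snippet initializes the tree twice, once with the data options and once with the callbacks, so the second call's callbacks may never be attached to the instance that actually loads the data. Merging both option objects into a single initialization, using the same pre-1.0 jsTree options shown above, keeps onchange registered on the tree that gets created:

        // Sketch: configure data and callbacks in one .tree() call.
        $('#treecontainer').tree({
            data: {
                type: "json",
                opts: { url: "/surveytags/treejson" }
            },
            callback: {
                onchange: function (node, tree) {
                    // Fires when the selected node changes.
                    alert('selected: ' + node.id);
                },
                ondblclk: function (node, tree) {
                    alert(node.id);
                }
            }
        });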

  • System.AccessViolationException: Attempted to read or write protected memory.

    - by Ananth
    I get the following exception when I try to "find and replace" in Word 2007, running on Windows Vista and Windows 7:

        System.AccessViolationException: Attempted to read or write protected memory.
        This is often an indication that other memory is corrupt.
           at Microsoft.Office.Interop.Word.Find.Execute(Object& FindText, Object& MatchCase,
              Object& MatchWholeWord, Object& MatchWildcards, Object& MatchSoundsLike,
              Object& MatchAllWordForms, Object& Forward, Object& Wrap, Object& Format,
              Object& ReplaceWith, Object& Replace, Object& MatchKashida, Object& MatchDiacritics,
              Object& MatchAlefHamza, Object& MatchControl)

    Is there any solution for this? I am using .NET 3.5 and C#.

  • N-Tier Architecture - Structure with multiple projects in VB.NET

    - by focus.nz
    I would like some advice on the best approach to use in the following situation. I will have a Windows application and a Web application (presentation layers), and these will both access a common business layer. The business layer will look at a configuration file to find the name of the DLL (data layer) to which it will create a reference at runtime (is this the best approach?).

    The reason for creating the reference to the data access layer at runtime is that the application will interface with a different third-party accounting system depending on what the client is using. So I would have a separate data access layer to support each accounting system. These could be separate setup projects; each client would use one or the other and wouldn't need to switch between the two.

    Projects:

        MyCompany.Common.dll - Contains interfaces; all other projects have a reference to this one.
        MyCompany.Windows.dll - Windows Forms project, references MyCompany.Business.dll
        MyCompany.Web.dll - Website project, references MyCompany.Business.dll
        MyCompany.Business.dll - Business layer, references MyCompany.Data.* (at runtime)
        MyCompany.Data.AccountingSys1.dll - Data layer for accounting system 1
        MyCompany.Data.AccountingSys2.dll - Data layer for accounting system 2

    The project MyCompany.Common.dll would contain all the interfaces; each other project would have a reference to this one.

        Public Interface ICompany
            ReadOnly Property Id() As Integer
            Property Name() As String
            Sub Save()
        End Interface

        Public Interface ICompanyFactory
            Function CreateCompany() As ICompany
        End Interface

    The projects MyCompany.Data.AccountingSys1.dll and MyCompany.Data.AccountingSys2.dll would contain classes like the following:

        Public Class Company
            Implements ICompany

            Protected _id As Integer
            Protected _name As String

            Public ReadOnly Property Id As Integer Implements MyCompany.Common.ICompany.Id
                Get
                    Return _id
                End Get
            End Property

            Public Property Name As String Implements MyCompany.Common.ICompany.Name
                Get
                    Return _name
                End Get
                Set(ByVal value As String)
                    _name = value
                End Set
            End Property

            Public Sub Save() Implements MyCompany.Common.ICompany.Save
                Throw New NotImplementedException()
            End Sub
        End Class

        Public Class CompanyFactory
            Implements ICompanyFactory

            Public Function CreateCompany() As ICompany Implements MyCompany.Common.ICompanyFactory.CreateCompany
                Return New Company()
            End Function
        End Class

    The project MyCompany.Business.dll would provide the business rules and retrieve data from the data layer:

        Public Class Companies
            Public Shared Function CreateCompany() As ICompany
                Dim factory As New MyCompany.Data.CompanyFactory
                Return factory.CreateCompany()
            End Function
        End Class

    Any opinions/suggestions would be greatly appreciated.
