Search Results

Search found 97840 results on 3914 pages for 'custom data type'.


  • SSIS Send Mail Task and ForceExecutionValue Error

    - by Kevin Shyr
    I tried to use the "ForcedExecutionValue" on several Send Mail Tasks and log the execution into an ExecValueVariable so that at the end of the package I can log to a table whether the data check was successful or not (by determining whether an email was sent out). I set up a Boolean variable that is accessible at the package level, then set up my Send Mail Task as in the screenshot below, with Boolean as my ForcedExecutionValueType.  When I ran the package, I got the error described below. Just to make sure this is not another issue SSIS has with the Boolean type (you also can't set a variable of type Boolean from xp_cmdshell), I tried variables of types String, Int32, and DateTime with the corresponding ForcedExecutionValueType.  The only way to get around this error was to set my variable to type Object, but then when you try to get the value out later, the Object is null. I didn't spend enough time on this to see whether it's really a bug in SSIS or just how the Send Mail Task works.  I just want to log the error and will circle back later to narrow the issue down some more.  In the meantime, please share if you have run into the same problem.  The current workaround is to attach a script task at the end. Also, I need to note two existing limitations: Data checks need to be done serially because every check needs an inner join to a master table.  The master table has all the data in a single XML column and hence needs to be retrieved with XQuery (a fundamental design flaw that needs to be changed). The next iteration will be to change this design into a FOR loop and pull the checking query from a table somewhere with all the info needed for the email task, but that has been pushed to the back of the priority list. Error Message: Error: 0xC001F009 at CountCheckBetweenODSAndCleanSchema: The type of the value being assigned to variable "User::WasErrorEmailEverSent" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object. Error: 0xC0019001 at Send Mail Task on count mismatch: The wrapper was unable to set the value of the variable specified in the ExecutionValueVariable property.   Screenshot of my Send Mail Task setup:

    Read the article

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2

    - by pinaldave
    Yesterday I wrote a real-world story about a friend who thought they had an intrusion or virus issue, whereas the issue was really in the code. I strongly suggest you read my earlier blog post Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2 before continuing this blog post, as this is the second part of that post. Let me reproduce the simple scenario in T-SQL. Building Sample Data USE [TestDB] GO -- Creating Table Products CREATE TABLE [dbo].[Products]( [ProductID] [int] NOT NULL, [ProductDesc] [varchar](50) NOT NULL, CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ( [ProductID] ASC )) ON [PRIMARY] GO -- Creating Table ProductDetails CREATE TABLE [dbo].[ProductDetails]( [ProductDetailID] [int] NOT NULL, [ProductID] [int] NOT NULL, [Total] [int] NOT NULL, CONSTRAINT [PK_ProductDetails] PRIMARY KEY CLUSTERED ( [ProductDetailID] ASC )) ON [PRIMARY] GO ALTER TABLE [dbo].[ProductDetails] WITH CHECK ADD CONSTRAINT [FK_ProductDetails_Products] FOREIGN KEY([ProductID]) REFERENCES [dbo].[Products] ([ProductID]) ON UPDATE CASCADE ON DELETE CASCADE GO -- Insert Data into Table USE TestDB GO INSERT INTO Products (ProductID, ProductDesc) SELECT 1, 'Bike' UNION ALL SELECT 2, 'Car' UNION ALL SELECT 3, 'Books' GO INSERT INTO ProductDetails ([ProductDetailID],[ProductID],[Total]) SELECT 1, 1, 200 UNION ALL SELECT 2, 1, 100 UNION ALL SELECT 3, 1, 111 UNION ALL SELECT 4, 2, 200 UNION ALL SELECT 5, 3, 100 UNION ALL SELECT 6, 3, 100 UNION ALL SELECT 7, 3, 200 GO Select Data from Tables -- Selecting Data SELECT * FROM Products SELECT * FROM ProductDetails GO Delete Data from Products Table -- Deleting Data DELETE FROM Products WHERE ProductID = 1 GO Select Data from Tables Again -- Selecting Data SELECT * FROM Products SELECT * FROM ProductDetails GO Clean up Data -- Clean up DROP TABLE ProductDetails DROP TABLE Products GO My friend was confused: no delete was ever issued against the ProductDetails table, yet rows were disappearing from it. The reason is the foreign key created between the Products and ProductDetails tables with the keywords ON DELETE CASCADE. When ON DELETE CASCADE is specified and data is deleted from table A, any rows referencing that data in another table through the foreign key are deleted as well. Workaround 1: Design Changes – 3 Tables Change the design to have more than two tables. Create one Product Master table with all the products. It should store the full historical product list, and no product should ever be removed from it. Add another table called CurrentProduct, which should contain only the products that should be visible in the product catalogue. A third table should be called ProductHistory. There should be no use of the CASCADE keyword among them. Workaround 2: Design Changes - Column IsVisible You can keep the same two tables: 1) Products and 2) ProductDetails. Add a column with the BIT datatype and name it IsVisible. Now change your application code to display the catalogue based on this column. There should be no need to delete anything. Workaround 3: Bad Advice (bad advice begins here) The reason I call this bad advice is that these suggestions are going to be bad for sure. You should make the necessary design changes and not use poor workarounds which can damage the system and database integrity further. Here are the examples: 1) Do not delete the data – well, this is not a real solution but it can buy time to implement design changes. 
2) Do not have ON DELETE CASCADE – in this case, you will have entries in ProductDetails with no corresponding ProductID, and later on there will be lots of confusion. 3) Duplicate Data – you can move all the data from the Products table into the ProductDetails table, repeating it on each row, and then remove the CASCADE clause. This will let you delete the Products table rows without any issue. There are so many things wrong with this suggestion that I will not even start here. (bad advice ends here)  Well, did I miss anything? Please help me with your suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Perm SSIS Developer Urgently Required

    - by blakmk
    Job Role To provide dedicated data services support to the company, by designing, creating, maintaining and enhancing database objects, ensuring data quality, consistency and integrity. Migrating data from various sources to a central SQL 2008 data warehouse will be the primary function. Migration of data from bespoke legacy databases to the SQL 2008 data warehouse. Understand key business requirements, liaising with various parts of the company. Create advanced transformations of data, with a focus on data cleansing, redundant data and duplication. Creating complex business rules regarding data services, migration, integrity and support (best practices). Experience · Minimum 3 years' SSIS experience, in a project or BI development role and involvement in at least 3 full ETL project life cycles, using the following methodologies and tools o Excellent knowledge of ETL concepts including data migration & integrity, focusing on SSIS. o Extensive experience with SQL 2005 products, SQL 2008 desirable. o Working knowledge of SSRS and its integration with other BI products. o Extensive knowledge of T-SQL, stored procedures, triggers (Table/Database), views, functions, in particular coding and querying. o Data cleansing and harmonisation. o Understanding and knowledge of indexes, statistics and table structure. o SQL Agent – scheduling jobs, optimisation, multiple jobs, DTS. o Troubleshoot, diagnose and tune database and physical server performance. o Knowledge and understanding of locking, blocks, table and index design and SQL configuration. · Demonstrable ability to understand and analyse business processes. · Experience in creating business rules on best practices for data services. · Experience in working with, supporting and troubleshooting MS SQL servers running enterprise applications. · Proven ability to work well within a team and liaise with other technical support staff such as networking administrators, system administrators and support engineers. · Ability to create formal documentation, work procedures, and service level agreements. · Ability to communicate technical issues at all levels, including to a non-technical audience. · Good working knowledge of MS Word, Excel, PowerPoint, Visio and Project. Location Based in Crawley with the possibility of some remote working. Contact me for more info: http://sqlblogcasts.com/blogs/blakmk/contact.aspx

    Read the article

  • Chrome refused to execute this JavaScript file

    - by TestSubject528491
    In the head of my HTML page, I have: <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script> When I load the page in my browser (Google Chrome v 27.0.1453.116) and enable the developer tools, it says: Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled. Indeed, the script won't run. Why does Chrome think this is a plain text file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem. So I added type="text/javascript" to the <script> tag, and got the same result. I even tried type="application/javascript" and still got the same error. Then I tried changing it to type="text/plain" just out of curiosity. The browser did not return an error, but of course the JavaScript did not run either. Finally I thought the periods in the filename might be throwing the browser off. So in my HTML code, I changed all the periods to the URL escape character %2E: <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script> This still did not work. The only thing that truly works (i.e. the browser does not give an error and the JS successfully runs) is if I download the file, upload it to a local directory, and then change the src value to the local file. I'd rather not do this since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually a JavaScript type?

    Read the article

  • How to mount a drive in /media/userName/ like Nautilus does, using udisks

    - by Bsienn
    As of my current installation of Ubuntu 13.10 Unity, when I click on a drive in Nautilus it gets mounted in /media/username/mountedDrive. I read that Nautilus uses udisks to do that. Basically I want to auto-mount my drive at startup using udisks, following this method. The problem is that it mounts the drive in /media/mountedDrive, whereas I want it the way Nautilus does it, in /media/username/mountedDrive. I want the NTFS Data drive to be auto-mounted at /media/bsienn/ bsienn@bsienn-desktop:~$ blkid /dev/sda1: LABEL="System Reserved" UUID="8230744030743D6B" TYPE="ntfs" /dev/sda2: LABEL="Windows 7" UUID="60100EA5100E81F0" TYPE="ntfs" /dev/sda3: LABEL="Data" UUID="882C04092C03F14C" TYPE="ntfs" /dev/sda5: UUID="8768800f-59e1-41a2-9092-c0a8cb60dabf" TYPE="swap" /dev/sda6: LABEL="Ubuntu Drive" UUID="13ea474a-fb27-4c91-bae7-c45690f88954" TYPE="ext4" /dev/sda7: UUID="69c22e73-9f64-4b48-b854-7b121642cd5d" TYPE="ext4" bsienn@bsienn-desktop:~$ sudo fdisk -l Disk /dev/sda: 160.0 GB, 160000000000 bytes 255 heads, 63 sectors/track, 19452 cylinders, total 312500000 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x8d528d52 Device Boot Start End Blocks Id System /dev/sda1 * 2048 206847 102400 7 HPFS/NTFS/exFAT /dev/sda2 206848 117730069 58761611 7 HPFS/NTFS/exFAT /dev/sda3 158690072 312494116 76902022+ 7 HPFS/NTFS/exFAT /dev/sda4 117731326 158689279 20478977 5 Extended /dev/sda5 137263104 141260799 1998848 82 Linux swap / Solaris /dev/sda6 141262848 158689279 8713216 83 Linux /dev/sda7 117731328 137263103 9765888 83 Linux Partition table entries are not in disk order bsienn@bsienn-desktop:~$ cat /etc/fstab # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> # / was on /dev/sda7 during installation UUID=69c22e73-9f64-4b48-b854-7b121642cd5d / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=8768800f-59e1-41a2-9092-c0a8cb60dabf none swap sw 0 0 Desired effect: Picture link

    Read the article

  • Java enum pairs / "subenum" or what exactly?

    - by vemalsar
    I have an RPG-style Item class and I store the type of the item in an enum (itemType.sword). I want to store a subtype too (itemSubtype.long), but I want to express the relation between the two data types (a sword can be long, short, etc., but a shield can't be long or short, only round, tower, etc.). I know this is not valid source code, but it is similar to what I want: enum type { sword; } //not valid code! enum swordSubtype extends type.sword { short, long } Question: How can I define this connection between the two data types (or more exactly, between the values of the two data types), and what is the simplest and most standard way? Array-like data with all valid (itemType, itemSubtype) enum pairs, or (itemType, itemSubtype[]) so there can be several subtypes for one type, would be the best. OK, but how can I construct this in the simplest way? A special enum with a "subenum" set, a second-level enum, or anything else like that if it exists A 2-dimensional "canBePairs" array with itemType and itemSubtype dimensions and boolean elements, where "true" means the itemType (first dimension) and itemSubtype (second dimension) combination is okay and "false" means it is not Some other, better idea Thank you very much!
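    One common way to model this (a minimal sketch using hypothetical ItemType/ItemSubtype names, not taken from the question) is to let each subtype constant carry a reference to its parent type, so every valid (type, subtype) pair is declared in exactly one place and each type can derive its own subtype set:

```java
import java.util.EnumSet;

enum ItemType {
    SWORD, SHIELD;

    // All subtypes that declare this type as their parent.
    public EnumSet<ItemSubtype> subtypes() {
        EnumSet<ItemSubtype> result = EnumSet.noneOf(ItemSubtype.class);
        for (ItemSubtype s : ItemSubtype.values()) {
            if (s.type() == this) {
                result.add(s);
            }
        }
        return result;
    }
}

enum ItemSubtype {
    SHORT(ItemType.SWORD), LONG(ItemType.SWORD),
    ROUND(ItemType.SHIELD), TOWER(ItemType.SHIELD);

    private final ItemType type;

    ItemSubtype(ItemType type) { this.type = type; }

    public ItemType type() { return type; }

    public boolean isValidFor(ItemType candidate) { return type == candidate; }
}

public class ItemDemo {
    public static void main(String[] args) {
        System.out.println(ItemSubtype.LONG.isValidFor(ItemType.SWORD));  // true
        System.out.println(ItemSubtype.LONG.isValidFor(ItemType.SHIELD)); // false
        System.out.println(ItemType.SWORD.subtypes());                    // [SHORT, LONG]
    }
}
```

    Because the relation lives in the ItemSubtype constructor arguments, adding a new subtype cannot silently create an invalid pair; the boolean "canBePairs" matrix mentioned in the question would work too, but it has to be maintained separately from the two enums.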

    Read the article

  • Using AMDU to extract datafiles from a DISKGROUP that cannot be MOUNTed

    - by Liu Maclean
    AMDU is a diagnostic utility Oracle developed for ASM; its full name is ASM Metadata Dump Utility (AMDU). AMDU provides three main capabilities: 1. dump the metadata on ASM DISKs to files; 2. extract the contents of ASM files and write them to the OS file system, without the Diskgroup having to be mounted; 3. print block metadata, in C-struct form or as a hex dump. Here we use AMDU to extract the datafiles stored inside an ASM DISKGROUP. Because AMDU does not require the DISKGROUP to be MOUNTed, it is a very practical rescue tool when ASM cannot mount a diskgroup and the RDBMS therefore cannot reach the files stored in it. Although AMDU ships with 11g, it can also be used against 10g ASM. In the scenario below, the ORACLE DATABASE's SPFILE, CONTROLFILE and DATAFILEs are all stored in an ASM DISKGROUP, and an ASM ORA-600 error prevents the DISKGROUP from being MOUNTed, so we use AMDU to extract them directly from the ASM disks. Step 1: locate and extract the SPFILE, CONTROLFILE and DATAFILEs. If the SPFILE is available (convert it to a PFILE if necessary), get the control_files parameter from it: SQL> show parameter control_files NAME TYPE VALUE———————————— ———– ——————————control_files string +DATA/prodb/controlfile/current.260.794687955, +FRA/prodb/controlfile/current.256.794687955 The control_files parameter gives the controlfile's location inside ASM; in +DATA/prodb/controlfile/current.260.794687955 the number 260 is the controlfile's FILE NUMBER within the +DATA DISKGROUP. We also need the ASM DISK discovery path, which can be taken from the asm_diskstring parameter in the ASM SPFILE. [oracle@mlab2 oracle.SupportTools]$ unzip amdu_X86-64.zipArchive: amdu_X86-64.zipinflating: libskgxp11.soinflating: amduinflating: libnnz11.soinflating: libclntsh.so.11.1 [oracle@mlab2 oracle.SupportTools]$ export LD_LIBRARY_PATH=./ [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring ‘/dev/asm*’ -extract data.260amdu_2009_10_10_20_19_17/AMDU-00204: Disk N0006 is in currently mounted diskgroup DATAAMDU-00201: Disk N0006: ‘/dev/asm-disk10'AMDU-00204: Disk N0003 is in currently mounted diskgroup DATAAMDU-00201: Disk N0003: ‘/dev/asm-disk5'AMDU-00204: Disk N0002 is in currently mounted diskgroup DATAAMDU-00201: Disk N0002: ‘/dev/asm-disk6' [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_19_17/[oracle@mlab2 amdu_2009_10_10_20_19_17]$ lsDATA_260.f report.txt[oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls -ltotal 9548-rw-r–r– 1 oracle oinstall 9748480 Oct 10 20:19 DATA_260.f-rw-r–r– 1 oracle oinstall 9441 Oct 10 20:19 report.txt The extracted file DATA_260.f is the controlfile; point the instance at it and startup mount the RDBMS: SQL> alter system set control_files=’/opt/oracle.SupportTools/amdu_2009_10_10_20_19_17/DATA_260.f’ scope=spfile; System altered. SQL> startup force mount;ORACLE instance started. Total System Global Area 1870647296 bytesFixed Size 2229424 bytesVariable Size 452987728 bytesDatabase Buffers 1409286144 bytesRedo Buffers 6144000 bytesDatabase mounted. SQL> select name from v$datafile; NAME——————————————————————————–+DATA/prodb/datafile/system.256.794687873+DATA/prodb/datafile/sysaux.257.794687875+DATA/prodb/datafile/undotbs1.258.794687875+DATA/prodb/datafile/users.259.794687875+DATA/prodb/datafile/example.265.794687995+DATA/prodb/datafile/mactbs.267.794688457 6 rows selected. After startup mount, v$datafile lists the datafile names, and the numbers embedded in those names are the FILE NUMBERs within the DISKGROUP; run ./amdu -diskstring ‘/dev/asm*’ -extract against each of them to extract the datafiles one by one.
[oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring ‘/dev/asm*’ -extract data.256amdu_2009_10_10_20_22_21/AMDU-00204: Disk N0006 is in currently mounted diskgroup DATAAMDU-00201: Disk N0006: ‘/dev/asm-disk10'AMDU-00204: Disk N0003 is in currently mounted diskgroup DATAAMDU-00201: Disk N0003: ‘/dev/asm-disk5'AMDU-00204: Disk N0002 is in currently mounted diskgroup DATAAMDU-00201: Disk N0002: ‘/dev/asm-disk6' [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_22_21/[oracle@mlab2 amdu_2009_10_10_20_22_21]$ lsDATA_256.f report.txt[oracle@mlab2 amdu_2009_10_10_20_22_21]$ dbv file=DATA_256.f DBVERIFY: Release 11.2.0.3.0 – Production on Sat Oct 10 20:23:12 2009 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. DBVERIFY – Verification starting : FILE = /opt/oracle.SupportTools/amdu_2009_10_10_20_22_21/DATA_256.f DBVERIFY – Verification complete Total Pages Examined : 90880Total Pages Processed (Data) : 59817Total Pages Failing (Data) : 0Total Pages Processed (Index): 12609Total Pages Failing (Index): 0Total Pages Processed (Other): 3637Total Pages Processed (Seg) : 1Total Pages Failing (Seg) : 0Total Pages Empty : 14817Total Pages Marked Corrupt : 0Total Pages Influx : 0Total Pages Encrypted : 0Highest block SCN : 1125305 (0.1125305)

    Read the article

  • LLBLGen Pro v3.5 has been released!

    - by FransBouma
    Last weekend we released LLBLGen Pro v3.5! Below is the list of what's new in this release. Of course, not everything is on this list, like the large amount of work we put into refactoring the runtime framework. The refactoring was necessary because our framework has two paradigms which were added to the framework at different times, and from a design perspective in the wrong order (the paradigm we added first, SelfServicing, should have been built on top of Adapter, the other paradigm, which was added more than a year after the first released version). The refactoring made sure the framework re-uses more code across the two paradigms (they already shared a lot of code) and is better prepared for the future. We're not done yet, but refactoring a massive framework like ours without breaking interfaces and existing applications is ... a bit of a challenge ;) To celebrate the release of v3.5, we give every customer a 30% discount! Use the coupon code NR1ORM with your order :) The full list of what's new: Designer Rule based .NET Attribute definitions. It's now possible to specify a rule using fine-grained expressions with an attribute definition to define which elements of a given type will receive the attribute definition. Rules can be assigned to attribute definitions on the project level, to make it even easier to define attribute definitions in bulk for many elements in the project. More information... Revamped Project Settings dialog. Multiple project related properties and settings dialogs have been merged into a single dialog called Project Settings, which makes it easier to configure the various settings related to project elements. It also makes it easier to find features previously not used by many (e.g. type conversions). More information... Home tab with Quick Start Guides. To make new users feel right at home, we added a home tab with quick start guides which guide you through four main use cases of the designer. System Type Converters. Many common conversions have been implemented by default in system type converters so users don't have to develop their own type converters anymore for these type conversions. Bulk Element Setting Manipulator. To change setting values for multiple project elements, it was a little cumbersome to do that without a lot of clicking and opening various editors. This dialog makes changing settings for multiple elements very easy. EDMX Importer. It's now possible to import entity model data information from an existing Entity Framework EDMX file. Other changes and fixes See the online documentation for the full list of changes and fixes. LLBLGen Pro Runtime Framework WCF Data Services (OData) support has been added. It's now possible to use your LLBLGen Pro runtime framework powered domain layer in a WCF Data Services application using the VS.NET tools for WCF Data Services. WCF Data Services is a Microsoft technology for .NET 4 to expose your domain model using OData. More information... New query specification and execution API: QuerySpec. QuerySpec is our new query specification and execution API as an alternative to Linq and our more low-level API. It's built, like our Linq provider, on top of our lower-level API. More information... SQL Server 2012 support. The SQL Server DQE allows paging using the new SQL Server 2012 style. More information... System Type converters. For a common set of types the LLBLGen Pro runtime framework contains built-in type conversions so you don't need to write your own type converters anymore. 
Public/NonPublic property support. It's now possible to mark a field / navigator as non-public, which is reflected in the runtime framework as an internal/friend property instead of a public property. This way you can hide properties from the public interface of a generated class and still access them through code added to the generated code base. FULL JOIN support. It's now possible to perform FULL JOIN joins using the native query api and QuerySpec. It's left to the developer to check whether the used target database supports FULL (OUTER) JOINs. Using a FULL JOIN with entity fetches is not recommended, and should only be used when both participants in the join aren't the target of the fetch. Dependency Injection Tracing. It's now possible to enable tracing on dependency injection. Enable tracing at level '4' on the traceswitch 'ORMGeneral'. This will emit trace information about which instance of which type got an instance of type T injected into property P. Entity Instances in projections in Linq. It's now possible to return an entity instance in a custom Linq projection. It's now also possible to pass this instance to a method inside the query projection. Inheritance is fully supported in this construct. Entity Framework support The Entity Framework has recently been updated with code-first support and a new, simpler context api: DbContext (with DbSet). The amount of code to generate is smaller and the context simpler. LLBLGen Pro v3.5 comes with support for DbContext and DbSet and generates code which utilizes these new classes. NHibernate support NHibernate v3.2+ built-in proxy factory factory support. By default the built-in ProxyFactoryFactory is selected. FluentNHibernate Session Manager uses 1.2 syntax. Fluent NHibernate mappings generate a SessionManager which uses the v1.2 syntax for the ProxyFactoryFactory location. Optionally emit schema / catalog name in mappings. Two settings have been added which allow the user to control whether the catalog name and/or schema name as known in the project in the designer is emitted into the mappings.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 29 (sys.dm_os_buffer_descriptors)

    - by Tamarick Hill
    The sys.dm_os_buffer_descriptors Dynamic Management View gives you a look into the data pages that are currently in your SQL Server buffer pool. Just in case you are not familiar with some of the internals of SQL Server and how the engine works, SQL Server only works with objects that are in memory (the buffer pool). When an object such as a table needs to be read and it does not exist in the buffer pool, SQL Server will read (copy) the necessary data page(s) from disk into the buffer pool and cache them. Caching takes place so that pages can be reused, which prevents the need for expensive physical reads. To better illustrate this DMV, let's query it against our AdventureWorks2012 database and view the result set. SELECT * FROM sys.dm_os_buffer_descriptors WHERE database_id = db_id('AdventureWorks2012') The first column returned from this result set is the database_id column, which identifies the specific database for a given row. The file_id column represents the file that a particular buffer descriptor belongs to. The page_id column represents the ID for the data page within the buffer. The page_level column represents the index level of the data page. Next we have the allocation_unit_id column, which identifies a unique allocation unit. An allocation unit is basically a set of data pages. The page_type column tells us exactly what type of page is in the buffer pool. From my screenshot above you see I have 3 distinct types of pages in my buffer pool: Index, Data, and IAM pages. Index pages are pages that are used to build the root and intermediate levels of a B-Tree. A Data page would represent the actual leaf pages of a clustered index, which contain the actual data for the table. Without getting into too much detail, an IAM page is an Index Allocation Map page, which tracks the extents that belong to an allocation unit (GAM, or Global Allocation Map, pages in turn track which extents are allocated on your system). The row_count column details how many data rows are present on a given page. The free_space_in_bytes column tells you how much of a given data page is still available; remember, pages are 8K in size. The is_modified column signifies whether or not a page has been changed since it was read into memory, i.e. a dirty page. The numa_node column represents the Non-uniform memory access node for the buffer. Lastly, the read_microsec column tells you how many microseconds it took for a data page to be read (copied) into the buffer pool. This is a great DMV for use when you are tracking down a memory issue or if you just want to have a look at what types of pages are currently in your buffer pool. For more information about this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms173442.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • Event Logging in LINQ C# .NET

    The first thing you'll want to do before using this code is to create a table in your database called TableHistory: CREATE TABLE [dbo].[TableHistory] (     [TableHistoryID] [int] IDENTITY NOT NULL ,     [TableName] [varchar] (50) NOT NULL ,     [Key1] [varchar] (50) NOT NULL ,     [Key2] [varchar] (50) NULL ,     [Key3] [varchar] (50) NULL ,     [Key4] [varchar] (50) NULL ,     [Key5] [varchar] (50) NULL ,     [Key6] [varchar] (50) NULL ,     [ActionType] [varchar] (50) NULL ,     [Property] [varchar] (50) NULL ,     [OldValue] [varchar] (8000) NULL ,     [NewValue] [varchar] (8000) NULL ,     [ActionUserName] [varchar] (50) NOT NULL ,     [ActionDateTime] [datetime] NOT NULL ) Once you have created the table, you'll need to add it to your custom LINQ class (which I will refer to as DboDataContext), thus creating the TableHistory class. Then, you'll need to add the History.cs file to your project. You'll also want to add the following code to your project to get the system date: public partial class DboDataContext{ [Function(Name = "GetDate", IsComposable = true)] public DateTime GetSystemDate() { MethodInfo mi = MethodBase.GetCurrentMethod() as MethodInfo; return (DateTime)this.ExecuteMethodCall(this, mi, new object[] { }).ReturnValue; }}private static Dictionary<Type, Delegate> _cachedIL = new Dictionary<Type, Delegate>();public static T CloneObjectWithIL<T>(T myObject){ Delegate myExec = null; if (!_cachedIL.TryGetValue(typeof(T), out myExec)) { // Create ILGenerator DynamicMethod dymMethod = new DynamicMethod("DoClone", typeof(T), new Type[] { typeof(T) }, true); ConstructorInfo cInfo = myObject.GetType().GetConstructor(new Type[] { }); ILGenerator generator = dymMethod.GetILGenerator(); LocalBuilder lbf = generator.DeclareLocal(typeof(T)); //lbf.SetLocalSymInfo("_temp"); generator.Emit(OpCodes.Newobj, cInfo); generator.Emit(OpCodes.Stloc_0); foreach (FieldInfo field in myObject.GetType().GetFields( System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.NonPublic)) { // Load the new object on the eval stack... (currently 1 item on eval stack) generator.Emit(OpCodes.Ldloc_0); // Load initial object (parameter) (currently 2 items on eval stack) generator.Emit(OpCodes.Ldarg_0); // Replace value by field value (still currently 2 items on eval stack) generator.Emit(OpCodes.Ldfld, field); // Store the value of the top on the eval stack into // the object underneath that value on the value stack. // (0 items on eval stack) generator.Emit(OpCodes.Stfld, field); } // Load new constructed obj on eval stack -> 1 item on stack generator.Emit(OpCodes.Ldloc_0); 
// Return constructed object. --> 0 items on stack generator.Emit(OpCodes.Ret); myExec = dymMethod.CreateDelegate(typeof(Func<T, T>)); _cachedIL.Add(typeof(T), myExec); } return ((Func<T, T>)myExec)(myObject);}I got both of the above methods off of the net somewhere (maybe even from CodeProject), but it's been long enough that I can't recall where I got them. Explanation of the History Class: The History class records changes by creating a TableHistory record, inserting the values of the primary key for the table being modified into the Key1, Key2, ..., Key6 columns (if you have more than 6 values that make up a primary key on any table, you'll want to modify this), setting the type of change being made in the ActionType column (INSERT, UPDATE, or DELETE), the old value and new value if it happens to be an update action, and the date and Windows identity of the user who made the change. Let's examine what happens when a call is made to the RecordLinqInsert method: public static void RecordLinqInsert(DboDataContext dbo, IIdentity user, object obj){ TableHistory hist = NewHistoryRecord(obj); hist.ActionType = "INSERT"; hist.ActionUserName = user.Name; hist.ActionDateTime = dbo.GetSystemDate(); dbo.TableHistories.InsertOnSubmit(hist);}private static TableHistory NewHistoryRecord(object obj){ TableHistory hist = new TableHistory(); Type type = obj.GetType(); PropertyInfo[] keys; if (historyRecordExceptions.ContainsKey(type)) { keys = historyRecordExceptions[type].ToArray(); } else { keys = type.GetProperties().Where(o => AttrIsPrimaryKey(o)).ToArray(); } if (keys.Length > KeyMax) throw new HistoryException("object has more than " + KeyMax.ToString() + " keys."); for (int i = 1; i <= keys.Length; i++) { typeof(TableHistory) .GetProperty("Key" + i.ToString()) .SetValue(hist, keys[i - 1].GetValue(obj, null).ToString(), null); } hist.TableName = type.Name; return hist;}protected static bool AttrIsPrimaryKey(PropertyInfo pi){ var attrs = from attr in pi.GetCustomAttributes(typeof(ColumnAttribute), true) where ((ColumnAttribute)attr).IsPrimaryKey select attr; if (attrs != null && attrs.Count() > 0) return true; else return false;}RecordLinqInsert takes as input a data context which it will use to write to the database, the user, and the LINQ object to be recorded (a single object, for instance, a Customer or Order object if you're using AdventureWorks). It then calls the NewHistoryRecord method, which uses LINQ to Objects in conjunction with the AttrIsPrimaryKey method to pull all the primary key properties, set the Key1-KeyN properties of the TableHistory object, and return the new TableHistory object. The code would be called in an application, like so:

    Read the article

  • SQL SERVER – Order By Numeric Values Formatted as String

    - by pinaldave
    When I was writing this blog post I had a hard time coming up with the title, so I did my best to come up with one. Here is the reason why. I wrote an earlier blog post, SQL SERVER – Find First Non-Numeric Character from String. One of the questions was how that technique can be useful in a real-life scenario. This blog post is the answer to that question. Let us first see a problem. We have a table which has a column containing alphanumeric data. The data always begins with an integer and ends with a string. The business need is to order the data based on the first part of the alphanumeric data, which is an integer. Now the problem is that no matter how we use ORDER BY, the result is not produced as expected. Let us understand this with an example. Prepare sample data: -- How to find first non numberic character USE tempdb GO CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100)) GO INSERT INTO MyTable (ID, Col1) SELECT 1, '1one' UNION ALL SELECT 2, '11eleven' UNION ALL SELECT 3, '2two' UNION ALL SELECT 4, '22twentytwo' UNION ALL SELECT 5, '111oneeleven' GO -- Select Data SELECT * FROM MyTable GO The above query will give the following result set. Now let us use ORDER BY Col1 and observe the result along with the original SELECT. -- Select Data SELECT * FROM MyTable GO -- Select Data SELECT * FROM MyTable ORDER BY Col1 GO The result is not as expected. We need the result in the following format. Here is a good example of how we can use PATINDEX. -- Use of PATINDEX SELECT ID, LEFT(Col1,PATINDEX('%[^0-9]%',Col1)-1) 'Numeric Character', Col1 'Original Character' FROM MyTable ORDER BY LEFT(Col1,PATINDEX('%[^0-9]%',Col1)-1) GO We can use PATINDEX to identify the length of the digit part in the alphanumeric string (remember: our string always has an integer as its first part; this will not work in any other scenario). Now you can use the LEFT function to extract the INT portion from the alphanumeric string and order the data according to it. You can easily clean up the script by dropping the following table. DROP TABLE MyTable GO Here is the complete script so you can easily refer to it. -- How to find first non numberic character USE tempdb GO CREATE TABLE MyTable (ID INT, Col1 VARCHAR(100)) GO INSERT INTO MyTable (ID, Col1) SELECT 1, '1one' UNION ALL SELECT 2, '11eleven' UNION ALL SELECT 3, '2two' UNION ALL SELECT 4, '22twentytwo' UNION ALL SELECT 5, '111oneeleven' GO -- Select Data SELECT * FROM MyTable GO -- Select Data SELECT * FROM MyTable ORDER BY Col1 GO -- Use of PATINDEX SELECT ID, Col1 'Original Character' FROM MyTable ORDER BY LEFT(Col1,PATINDEX('%[^0-9]%',Col1)-1) GO DROP TABLE MyTable GO Well, isn't it an interesting solution? Any suggestions for a better solution? Additionally, any suggestions for changing the title of this blog post? Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL String, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints

    - by pinaldave
    I earlier wrote about the SQL BI Quiz over here and here. The details of the quiz are here: Working with huge amounts of data is very common in data warehousing. It is necessary to create cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from a cube takes a lot of time. Let us assume that your cube is returning data very quickly. Suddenly one day it starts returning the data very slowly. What are the three things you will do to diagnose this? After diagnosing it, what will you do to resolve the performance issue? Participate in my question over here. I requested BI expert Jason Thomas to help with a few hints for blog readers. He is one of the leading SSAS experts and writes about complicated subjects in simple words. If queries were executing properly before but now take a long time to return the data, it means that there has been a change in the environment in which they are running. Some possible changes are listed below:-  1) Data factors:- Compare the data size then and now. An increase in data can result in different execution times. Poorly written queries as well as poor design will not start showing issues till the data grows. How to find it out? (Ans : SQL Server Profiler and Perfmon counters can be used for identifying the issues and performance tuning the MDX queries)  2) Internal factors:- Are some slow MDX queries (one or several) running at the same time which were not running when you tested before? Is there any locking happening due to proactive caching or processing operations? Are the measure group caches being cleared by processing operations? (Ans : Again, Profiler and Perfmon counters will help in finding it out. Load testing can be done using AS Performance Workbench (http://asperfwb.codeplex.com/) by running multiple queries at once)  3) External factors:- Is some other application competing for the same resources?  HINT : Read "Identifying and Resolving MDX Query Performance Bottlenecks in SQL Server 2005 Analysis Services" (http://sqlcat.com/whitepapers/archive/2007/12/16/identifying-and-resolving-mdx-query-performance-bottlenecks-in-sql-server-2005-analysis-services.aspx) Well, these are great tips. Now win big prizes by participating in my question over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Architecture for a template-building, WYSIWIG application

    - by Sam Selikoff
    I'm building a WYSIWYG designer in Ember.js. The designer will allow users to create campaigns - think MailChimp. To build a campaign, users will choose an existing template. The template will have a defined layout. The user will then be taken to the designer, where he will be able to edit the text and style, and additionally change some layout options. I've been thinking about how best to go about structuring this app, and there are a few hurdles. Specifically, the output of the campaign will be dynamic: eventually, it will be published somewhere, and when the consumers (not my users, but the people clicking on the campaign that my user created) visit the campaign, certain pieces of data will change, depending on the type of consumer viewing the campaign. That means the ultimate output of the designer will be a dynamic site. The data that is dynamic for this site - the end product - will not be manipulated by the user in the designer. However, the data that will be manipulated by the user in the designer are things like copy, styles, layout options, etc. I'll call the first set of variables server-side data, and the second client-side data. It seems, then, that the process will go something like this: I'll need to create templates for this designer that have two dynamic segments. For instance, the server-side data could be Liquid expressions, and the client-side data Handlebars expressions. When the user creates a campaign, I would compile the template on the back end using some dummy data for the server-side variables, and serve up a Handlebars template to the Ember app. The user would then edit the template, and the Ember app would save all his edits to the JS variables that were powering the template. This way he'd be able to preview the template. When he saves, he'll send back the selected template, along with all the data and options he's chosen. When it comes time to publish, the back-end system will have to do two things: compile the template with Handlebars using the campaign data, and then compile the template with Liquid using the server-side data. Is my thinking roughly accurate about this, or is there a simpler way?

    Read the article

  • What's Old is New Again

    - by David Dorf
    Last night I told my son he could stream music to his tablet "from the cloud" (in this case, the Amazon Cloud).  He paused, then said, "what is the cloud?"  I replied, "a bunch of servers connected to the internet."  Apparently he had visions of something much more magnificent.  Another similar term is "big data."  These marketing terms help to quickly convey topics but are oversimplifications that are open to many interpretations.  At their core, those terms are shiny packages holding recycled ideas. I see many headlines declaring big data changes everything, but it doesn't.  Savvy retailers have been dealing with large volumes of data since the electronic cash register was invented.  But there have been a few changes to the landscape that make big data a topic of conversation: 1. Computing power has caught up to storage volumes. It's now possible to more thoroughly analyze the copious volumes of data retailers have been squirreling away.  CPUs are faster, solid state drives more plentiful, and new ways to store and search data are available.  My iPhone has more power than the computer used in the Apollo mission to the moon. 2. Unstructured data is everywhere.  The Web used to be where retailers published product information, but now users are generating the bulk of the content in the form of comments, videos, and "likes."  The variety of information available to retailers is huge, and its meaning difficult to discern. 3. Everything is connected.  Looking at a report from my router, there are no less than 20 active devices on my home network.  We can track the location of mobile phones, tag products with RFID, and set our thermostats (I love my Nest) from a thousand miles away.  Not only is there more data, but it's arriving at higher velocity. Careful readers will note the three Vs that help define so-called big data: volume, variety, and velocity. We now have more volume, more variety, and more velocity, and different technologies to deal with them.  But at the heart, the objectives are still the same: Informed decisions Accurate forecasts Improved optimizations So don't let the term "big data" throw you off the scent.  Retailers still need to execute on the basics.  But do take a fresh look at the data that's available and the new technologies to process it.  The landscape will continue to change and agile organizations will always be reevaluating their approaches.  You can just add some more weapons to the arsenal.

    Read the article

  • WebSocket API 1.1 released!

    - by Pavel Bucek
    It's my pleasure to announce that the JSR 356 – Java API for WebSocket maintenance release ballot vote finished with a majority of "yes" votes (actually, only one eligible voter did not vote; all other votes were "yeses"). The new release is a maintenance release and it addresses only one issue: WEBSOCKET_SPEC-226. What changed in 1.1? Version 1.1 is fully backwards compatible with version 1.0; there are only two methods added to javax.websocket.Session: /** * Register to handle to incoming messages in this conversation. A maximum of one message handler per * native websocket message type (text, binary, pong) may be added to each Session. I.e. a maximum * of one message handler to handle incoming text messages a maximum of one message handler for * handling incoming binary messages, and a maximum of one for handling incoming pong * messages. For further details of which message handlers handle which of the native websocket * message types please see {@link MessageHandler.Whole} and {@link MessageHandler.Partial}. * Adding more than one of any one type will result in a runtime exception. * * @param clazz   type of the message processed by message handler to be registered. * @param handler whole message handler to be added. * @throws IllegalStateException if there is already a MessageHandler registered for the same native *                               websocket message type as this handler. */ public void addMessageHandler(Class<T> clazz, MessageHandler.Whole<T> handler); /** * Register to handle to incoming messages in this conversation. A maximum of one message handler per * native websocket message type (text, binary, pong) may be added to each Session. I.e. a maximum * of one message handler to handle incoming text messages a maximum of one message handler for * handling incoming binary messages, and a maximum of one for handling incoming pong * messages. For further details of which message handlers handle which of the native websocket * message types please see {@link MessageHandler.Whole} and {@link MessageHandler.Partial}. * Adding more than one of any one type will result in a runtime exception. * * * @param clazz   type of the message processed by message handler to be registered. * @param handler partial message handler to be added. * @throws IllegalStateException if there is already a MessageHandler registered for the same native *                               websocket message type as this handler. */ public void addMessageHandler(Class<T> clazz, MessageHandler.Partial<T> handler); Why do we need to add those methods? The short and not quite precise version: to support Lambda expressions as MessageHandlers. The longer and slightly more precise explanation: the old Session#addMessageHandler method (which is still there and works as it worked till now) relies on getting the generic parameter at runtime, which is not (always) possible. The unfortunate part is that it works for some common cases, and because of that the expert group did not catch this issue before the 1.0 release. The issue is really clearly visible when Lambdas are used as message handlers: session.addMessageHandler(message -> { System.out.println("### Received: " + message); }); There is no way for the JSR 356 implementation to get the type of the used Lambda expression, thus this call will always result in an exception. 
Since all modern IDEs recommend using Lambda expressions when possible and the MessageHandler interfaces are single-method interfaces, everything basically just screams "use Lambdas" all over the place, but when you do that, the application will fail at runtime. The only solution we currently have is to explicitly provide the type of the registered MessageHandler. (There might be another sometime in the future when generic type reification is introduced, but that is not going to happen soon enough). So the example above will then be: session.addMessageHandler(String.class, message -> { System.out.println("### Received: " + message); }); and voila, it works. There are some limitations – you cannot do List<String>.class, so you will need to encapsulate these types when you want to use them in a MessageHandler implementation (something like "class MyType extends ArrayList<String>"). There is no better way to solve this issue, because Java currently does not provide a good way to describe generic types. The API itself is available on Maven Central; look for javax.websocket:javax.websocket-api:1.1. The reference implementation is project Tyrus, which implements WebSocket API 1.1 from version 1.8.
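    For completeness, here is a minimal, self-contained client endpoint sketch showing where the new addMessageHandler(Class, MessageHandler.Whole) overload would typically be called; the class name and the ws://localhost:8025/echo URI are placeholders for illustration, not something from the release notes:

```java
import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

// Hypothetical echo client; the addMessageHandler call is the point of the sketch.
@ClientEndpoint
public class EchoClient {

    @OnOpen
    public void onOpen(Session session) throws Exception {
        // Passing String.class explicitly tells the runtime which message type the
        // lambda handles, which it cannot infer from the lambda on its own.
        session.addMessageHandler(String.class, (String message) ->
                System.out.println("### Received: " + message));
        session.getBasicRemote().sendText("hello");
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // Assumes an echo server is listening locally, e.g. one started with Tyrus.
        container.connectToServer(EchoClient.class, URI.create("ws://localhost:8025/echo"));
        Thread.sleep(1000); // crude wait so the reply can arrive before the JVM exits
    }
}
```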

    Read the article

  • Loading XML file containing leading zeros with SSIS preserving the zeros

    - by Compudicted
    Visiting the MSDN SQL Server Integration Services forum, I would often see people pop up asking this question: "why am I not able to load an element from an XML file that contains zeros so that the leading/trailing zeros remain intact?". I started to suspect that such a trivial and often-required operation is perhaps being misunderstood by the developer community. I would also like to add that the whole state of affairs surrounding XML today is probably also increasingly affected by a group of people who dislike XML in general, and many aspects of it, such as XSD and XSLT, invoke a negative reaction at best. Nevertheless, XML is in wide use today and its importance as a bridge between diverse systems is ever increasing. Therefore, I decided to write up an example of loading an arbitrary XML file that contains leading zeros in one of its elements using SSIS, keeping simplicity in mind, so that the leading zeros are preserved when loaded into a table in a SQL Server database. To start off, bring up BIDS (running as admin) and add a new Data Flow Task (DFT). This DFT will serve as a container for adding our XML processing elements (besides, the XML Source is not available anywhere else other than from within the DFT). Double-click your DFT and drag and drop the XML Source component from the Toolbox's Data Flow Sources. Now, let the fun begin! Inspired by the upcoming Christmas, I created a simple XML file with one set of data that contains an imaginary SSN for Rudolph with several leading zeros, like 0000003. This file can be viewed here. To configure the XML Source it is of course quite intuitive to point it to our XML file; next, what the XML Source needs is either an embedded schema (XSD) or it can generate one for us. In lack of one, I opted to auto-generate it, and I ended up with an XSD that looked like: <?xml version="1.0"?> <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="XMasEvent"> <xs:complexType> <xs:sequence> <xs:element minOccurs="0" name="CaseInfo"> <xs:complexType> <xs:sequence> <xs:element minOccurs="0" name="ID" type="xs:unsignedByte" /> <xs:element minOccurs="0" name="CreatedDate" type="xs:unsignedInt" /> <xs:element minOccurs="0" name="LastName" type="xs:string" /> <xs:element minOccurs="0" name="FirstName" type="xs:string" /> <xs:element minOccurs="0" name="SSN" type="xs:unsignedByte" /> <!-- Becomes string -- > <xs:element minOccurs="0" name="DOB" type="xs:unsignedInt" /> <xs:element minOccurs="0" name="Event" type="xs:string" /> <xs:element minOccurs="0" name="ClosedDate" /> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> As an aside on the XML file: if your XML file does not contain the outer node (<XMasEvent>) then you may end up in a situation where you see just one field in the output. Now please note that the SSN element's data type was chosen to be unsignedByte (and this is for a reason). The reason stems from the fact that all the figures in the element are digits. This is good, but it is not exactly what we need, because if we attempt to load the data with this XSD then we are going to either get errors on the destination or, most typically, lose the leading zeros. So the next intuitive choice is to change the data type to string. 
Besides, if an SSIS package was already created based on this XSD and the data type change was made thereafter, you should reset the metadata by right-clicking the XML Source and choosing "Advanced Editor", in which there is a refresh button at the bottom left that will do the trick. So far so good, we are ready to load our XML file. Well, actually yes and no; in my experience some data conversion is typically required. So depending on your data destination you may need to tweak the targeted data types. Let's add a Data Conversion Task to our DFT. Your package should look like: To make the story short, I will only cover the SSN field: in my data source the target SQL table has it as nchar(10) and we chose string in our XSD (yes, this is a big difference), and under such circumstances SSIS will complain. So we will go and change the data type of SSN to Unicode string (DT_WSTR), a wide string per se. The conversion should look like: A peek at the metadata: We are almost there; now all we need is to configure the destination. For simplicity I chose SQL Server Destination. The mapping is a breeze: F5 and I am able to insert my data into SQL Server now! Checking the zeros – they are all intact!

    Read the article

  • Android: Custom ListAdapter extending BaseAdapter crashes on application launch.

    - by Danny
    Data being pulled from a local DB, then mapped using a cursor. Custom Adapter displays data similar to a ListView. As items are added/deleted from the DB, the adapter is supposed to refresh. The solution attempted below crashes the application at launch. Any suggestions? Thanks in advance, -D @Override public View getView(int position, View convertView, ViewGroup parent) { View v = convertView; ViewGroup p = parent; if (v == null) { LayoutInflater vi = (LayoutInflater)context.getSystemService(Context.LAYOUT_INFLATER_SERVICE); v = vi.inflate(R.layout.items_row, p); } int size = mAdapter.getCount(); Log.d(TAG, "position " + position + " Size " + size); if(size != 0){ if(position < size) return mAdapter.getView(position, v, p); Log.d(TAG, "-position " + position + " Size " + size); } return null; } Errors: 03-23 00:14:10.392: ERROR/AndroidRuntime(718): java.lang.UnsupportedOperationException: addView(View, LayoutParams) is not supported in AdapterView 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.AdapterView.addView(AdapterView.java:461) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.view.LayoutInflater.inflate(LayoutInflater.java:415) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.view.LayoutInflater.inflate(LayoutInflater.java:320) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.view.LayoutInflater.inflate(LayoutInflater.java:276) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at com.xyz.abc.CustomSeparatedListAdapter.getView(CustomSeparatedListAdapter.java:90) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.AbsListView.obtainView(AbsListView.java:1273) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.ListView.makeAndAddView(ListView.java:1658) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.ListView.fillDown(ListView.java:637) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.ListView.fillFromTop(ListView.java:694) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.ListView.layoutChildren(ListView.java:1516) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.AbsListView.onLayout(AbsListView.java:1112) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.view.View.layout(View.java:6569) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.FrameLayout.onLayout(FrameLayout.java:333) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.view.View.layout(View.java:6569) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.FrameLayout.onLayout(FrameLayout.java:333) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.view.View.layout(View.java:6569) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.FrameLayout.onLayout(FrameLayout.java:333) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.view.View.layout(View.java:6569) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.LinearLayout.layoutVertical(LinearLayout.java:998) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.LinearLayout.onLayout(LinearLayout.java:918) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.view.View.layout(View.java:6569) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.FrameLayout.onLayout(FrameLayout.java:333) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.view.View.layout(View.java:6569) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at 
android.widget.FrameLayout.onLayout(FrameLayout.java:333) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.view.View.layout(View.java:6569) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.widget.FrameLayout.onLayout(FrameLayout.java:333) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.view.View.layout(View.java:6569) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.view.ViewRoot.performTraversals(ViewRoot.java:979) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.view.ViewRoot.handleMessage(ViewRoot.java:1613) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.os.Handler.dispatchMessage(Handler.java:99) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.os.Looper.loop(Looper.java:123) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at android.app.ActivityThread.main(ActivityThread.java:4203) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at java.lang.reflect.Method.invokeNative(Native Method) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at java.lang.reflect.Method.invoke(Method.java:521) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549) 03-23 00:14:10.392: ERROR/AndroidRuntime(718): at dalvik.system.NativeStart.main(Native Method)
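    A likely culprit, going by the first line of the trace ("addView(View, LayoutParams) is not supported in AdapterView"), is the inflate call: passing the parent ViewGroup without an explicit attachToRoot argument makes the LayoutInflater try to attach the freshly inflated row to the AdapterView, which ListView and friends forbid. A minimal sketch of the usual fix, reusing the question's own layout and field names (so treat them as assumptions), is shown below; the rest of the getView logic would stay as it is, though returning null from getView should also be avoided.

```java
// Sketch of the inflate fix only; imports and the surrounding adapter class are omitted.
@Override
public View getView(int position, View convertView, ViewGroup parent) {
    View v = convertView;
    if (v == null) {
        LayoutInflater vi = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        // attachToRoot = false: the parent is only consulted for LayoutParams;
        // the ListView attaches the row itself later. Attaching it here is what
        // throws "addView(View, LayoutParams) is not supported in AdapterView".
        v = vi.inflate(R.layout.items_row, parent, false);
    }
    // ... delegate to mAdapter.getView(position, v, parent) as before, but fall
    // back to returning v (or an empty row) rather than null ...
    return v;
}
```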

    Read the article

  • Custom Geo Tagging. Name to position and position to name

    - by Toni Michel Caubet
    Hello there, I am implementing a custom geo tagging system. This is the array where I store the coordinates of each place: /* ****** Map tagging options ****** */ var TagSpeed = 800; //animation duration var animate = true; //false = fadeIn / true = animate var paises = { "isora": { leftX: '275', topY: '60', name: 'Gran Melia Palacio de Isora' }, "pepe": { leftX: '275', topY: '60', name: 'Gran Melia de Don Pepe' }, "australia": { leftX: '565', topY: '220', name: 'Gran Melia Uluru' }, "otro": { // example leftX: '565', // x coordinate topY: '220', // y coordinate name: 'soy otro' // name to display } /* <==> <span class="otro mPointer">isora</span> */ } /**/ This is how I handle it with JS: function escucharMapa(){ /*fOpciones*/ $('.mPointer').bind('mouseover',function(){ var clase = $(this).attr('class').replace(" mPointer", ""); var left_ = paises[clase].leftX; var top_ = paises[clase].topY; var name_ = paises[clase].name; $('.arrow .text').html(name_); /*This line centers the hotel label on the arrow. If the font size or typeface changes, the 3.3 has to change too*/ $('.arrow .text').css({'marginLeft':'-'+$('.arrow .text').html().length*3.3+'px'}); $('.arrow').css({top:'-60px',left:left_+'px'}); if(animate) $('.arrow').show().stop().animate({'top':top_},TagSpeed,'easeInOutBack'); else $('.arrow').css({'top':top_+'px'}).fadeIn(500); }); $('.mPointer').bind('mouseleave',function(){ if(animate) $('.arrow').stop().animate({'top':'0px'},100,function(){ $('.arrow').hide() }); else $('.arrow').hide(); }); } /*Start of geo-tagging handling*/ $(document).ready(function(){ escucharMapa(); }); HTML <div style="float:left;height:500px;"> <div class="map"> <div class="arrow"> <div class="text"></div> <img src="img/flecha.png"/> </div> <!--map--> <img src="http://www.freepik.es/foto-gratis/mapa-del-mundo_17-903095345.jpg" id="img1"/> <br/> <br/> <span class="isora mPointer">isora</span> <span class="pepe mPointer">Pepe</span> <span class="australia mPointer">Australia</span> </div> </div> OK, so you have a few items, and when you hover one it reads the class name, looks up the coordinates in the object and shows a marker at those coordinates on the image, right? Now, how can I do the opposite? Say the user hovers within a +-30px error margin (top and left) of an area on the map, and the corresponding item gets highlighted. I was considering: on map image mouse over, get the offset of the mouse; if it is inside the error margin of a place, show it, otherwise don't. But that does not look very efficient, since it would have to recalculate on every pixel of movement, no?
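    A rough sketch of that reverse lookup (position to name), reusing the paises object above. The 'highlight' class is an assumption, and because paises only has a handful of entries, looping over it on every mousemove is cheap enough in practice:
        var margin = 30; // the +-30px tolerance mentioned above

        $('#img1').bind('mousemove', function (e) {
            var offset = $(this).offset();
            var x = e.pageX - offset.left;
            var y = e.pageY - offset.top;

            $('.mPointer').removeClass('highlight');

            $.each(paises, function (key, place) {
                if (Math.abs(x - place.leftX) <= margin && Math.abs(y - place.topY) <= margin) {
                    // highlights e.g. <span class="isora mPointer">
                    $('span.' + key).addClass('highlight');
                }
            });
        });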

    Read the article

  • How to configure a custom binding to consume this WS-Security webservice using WCF?

    - by Soeteman
    Hello all, I'm trying to configure a WCF client to be able to consume a webservice that returns the following response message: Response message <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns0="http://myservice.wsdl"> <env:Header> <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" env:mustUnderstand="1" /> </env:Header> <env:Body> <ns0:StatusResponse> <result> ... </result> </ns0:StatusResponse> </env:Body> </env:Envelope> To do this, I've constructed a custom binding (which doesn't work). I keep getting a "Security header is empty" message. My binding: <customBinding> <binding name="myCustomBindingForVestaServices"> <security authenticationMode="UserNameOverTransport" messageSecurityVersion="WSSecurity11WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11" securityHeaderLayout="Strict" includeTimestamp="false" requireDerivedKeys="true"> </security> <textMessageEncoding messageVersion="Soap11" /> <httpsTransport authenticationScheme="Negotiate" requireClientCertificate ="false" realm =""/> </binding> </customBinding> My request seems to be using the same SOAP and WS Security versions as the response, but use different namespace prefixes ("o" instead of "wsse"). Could this be the reason why I keep getting the "Security header is empty" message? Request message <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-d3b70d1f-0ebb-4a79-85e6-34f0d6aa3d0f-1"> <o:Username>user</o:Username> <o:Password>pass</o:Password> </o:UsernameToken> </o:Security> </s:Header> <s:Body> <getPrdStatus xmlns="http://myservice.wsdl"> <request xmlns="" xmlns:a="http://myservice.wsdl" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> ... </request> </getPrdStatus> </s:Body> </s:Envelope> How do I need to configure my WCF client binding to be able to consume this webservice? Any help greatly appreciated! Sander
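    The o/wsse prefix difference should not matter as long as the namespace URI is the same; the error more likely comes from the response's Security header being literally empty, which the UserNameOverTransport mode still tries to validate. One thing worth trying (a sketch, not a verified fix for this particular service) is the enableUnsecuredResponse attribute that became available on the customBinding security element in .NET 4.0 (and via a 3.5 SP1 hotfix):
        <customBinding>
          <binding name="myCustomBindingForVestaServices">
            <!-- enableUnsecuredResponse lets WCF accept a response that carries no
                 usable security header; assumption: your client runs on .NET 4.0+ -->
            <security authenticationMode="UserNameOverTransport"
                      messageSecurityVersion="WSSecurity11WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11"
                      securityHeaderLayout="Strict"
                      includeTimestamp="false"
                      enableUnsecuredResponse="true" />
            <textMessageEncoding messageVersion="Soap11" />
            <httpsTransport requireClientCertificate="false" />
          </binding>
        </customBinding>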

    Read the article

  • C# send and receive an object over the network?

    - by Data-Base
    Hello, I'm working on a server/client project. The client asks the server for info and the server sends it back; the info may be a string, number, array, list, ArrayList or any other object. I found a lot of examples but ran into issues. The solution I found so far is to serialize the object (data), send it, and then deserialize it for processing. Here is the server code: public void RunServer(string SrvIP,int SrvPort) { try { var ipAd = IPAddress.Parse(SrvIP); /* Initializes the Listener */ if (ipAd != null) { var myList = new TcpListener(ipAd, SrvPort); /* Start Listening at the specified port */ myList.Start(); Console.WriteLine("The server is running at port "+SrvPort+"..."); Console.WriteLine("The local End point is :" + myList.LocalEndpoint); Console.WriteLine("Waiting for a connection....."); while (true) { Socket s = myList.AcceptSocket(); Console.WriteLine("Connection accepted from " + s.RemoteEndPoint); var b = new byte[100]; int k = s.Receive(b); Console.WriteLine("Received..."); for (int i = 0; i < k; i++) Console.Write(Convert.ToChar(b[i])); string cmd = Encoding.ASCII.GetString(b); if (cmd.Contains("CLOSE-CONNECTION")) break; var asen = new ASCIIEncoding(); // sending text s.Send(asen.GetBytes("The string was received by the server.")); // the line above is what should be changed to send a serialized object? Console.WriteLine("\nSent Acknowledgement"); s.Close(); Console.ReadLine(); } /* clean up */ myList.Stop(); } } catch (Exception e) { Console.WriteLine("Error..... " + e.StackTrace); } } Here is the client code that should return an object: public object runClient(string SrvIP, int SrvPort) { object obj = null; try { var tcpclnt = new TcpClient(); Console.WriteLine("Connecting....."); tcpclnt.Connect(SrvIP, SrvPort); // use the ipaddress as in the server program Console.WriteLine("Connected"); Console.Write("Enter the string to be transmitted : "); var str = Console.ReadLine(); Stream stm = tcpclnt.GetStream(); var asen = new ASCIIEncoding(); if (str != null) { var ba = asen.GetBytes(str); Console.WriteLine("Transmitting....."); stm.Write(ba, 0, ba.Length); } var bb = new byte[2000]; var k = stm.Read(bb, 0, bb.Length); string data = null; for (var i = 0; i < k; i++) Console.Write(Convert.ToChar(bb[i])); //convert to object code ?????? Console.ReadLine(); tcpclnt.Close(); } catch (Exception e) { Console.WriteLine("Error..... " + e.StackTrace); } return obj; } I need to know a good way to serialize/deserialize and how to integrate it into the solution above :-( I would be really thankful for any help. Cheers
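    A minimal sketch of the missing serialize/deserialize piece, using BinaryFormatter over the existing sockets. The types being sent must be marked [Serializable], and in real use you also need a length prefix or a read loop, since TCP may split one message across several Receive calls:
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        // Turn any [Serializable] object into a byte array for Socket.Send / Stream.Write
        static byte[] ObjectToBytes(object obj)
        {
            using (var ms = new MemoryStream())
            {
                new BinaryFormatter().Serialize(ms, obj);
                return ms.ToArray();
            }
        }

        // Rebuild the object from the bytes actually read from the socket
        static object ObjectFromBytes(byte[] buffer, int length)
        {
            using (var ms = new MemoryStream(buffer, 0, length))
            {
                return new BinaryFormatter().Deserialize(ms);
            }
        }

        // Server side: s.Send(ObjectToBytes(resultObject));
        // Client side: obj = ObjectFromBytes(bb, k);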

    Read the article

  • Why does my TextBox with custom control template not have a visible text cursor?

    - by Philipp Schmid
    I have a custom control template which is set via the style property on a TextBox. The visual poperties are set correctly, even typing to the textbox works, but there is no insertion cursor (the | symbol) visible which makes editing challenging for our users. How does the control template need changing to get the traditional TextBox behavior back? <Style x:Key="DemandEditStyle" TargetType="TextBox"> <EventSetter Event="LostFocus" Handler="DemandLostFocus" /> <Setter Property="HorizontalAlignment" Value="Stretch" /> <Setter Property="VerticalAlignment" Value="Stretch" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate> <Grid HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> <Grid.ColumnDefinitions> <ColumnDefinition Width="*" /> <ColumnDefinition Width="1" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="*" /> <RowDefinition Height="1" /> </Grid.RowDefinitions> <Grid.Background> <LinearGradientBrush StartPoint="0,0" EndPoint="0,1"> <GradientStop Color="White" Offset="0" /> <GradientStop Color="White" Offset="0.15" /> <GradientStop Color="#EEE" Offset="1" /> </LinearGradientBrush> </Grid.Background> <Border Grid.Row="1" Grid.Column="0" Grid.ColumnSpan="2" Background="Black" /> <Border Grid.Row="0" Grid.Column="1" Grid.RowSpan="2" Background="Black" /> <Grid Grid.Row="0" Grid.Column="0" Margin="2"> <Grid.ColumnDefinitions> <ColumnDefinition Width="1" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="1" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="1" /> <RowDefinition Height="*" /> <RowDefinition Height="1" /> </Grid.RowDefinitions> <Border Grid.Row="0" Grid.Column="0" Grid.ColumnSpan="3" Background="Black" /> <Border Grid.Row="0" Grid.Column="0" Grid.RowSpan="3" Background="Black" /> <Border Grid.Row="2" Grid.Column="0" Grid.ColumnSpan="3" Background="#CCC" /> <Border Grid.Row="0" Grid.Column="2" Grid.RowSpan="3" Background="#CCC" /> <TextBlock Grid.Row="1" Grid.Column="1" TextAlignment="Right" HorizontalAlignment="Center" VerticalAlignment="Center" Padding="3 0 3 0" Background="Yellow" Text="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=Text}" Width="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type Grid}, AncestorLevel=1}, Path=ActualWidth}" /> </Grid> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> Update: Replacing the inner-most TextBox with a ScrollViewer and naming it PART_ContentHost indeed shows the text insertion cursor. Trying to right-align the text in the TextBox by either setting the HorizontalContentAlignment in the Style or as a property on the ScrollViewer were unsuccessful. Suggestions?
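    For reference, a trimmed sketch of the direction already hinted at in the update: the editable region has to be a ScrollViewer named PART_ContentHost (a TextBlock bound to Text displays text but never hosts the caret), and right alignment is then driven by TextBox.TextAlignment rather than HorizontalContentAlignment. The decorative borders from the original template are omitted here:
        <Style x:Key="DemandEditStyle" TargetType="TextBox">
            <EventSetter Event="LostFocus" Handler="DemandLostFocus" />
            <Setter Property="TextAlignment" Value="Right" />
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="TextBox">
                        <Border Background="Yellow" BorderBrush="Black" BorderThickness="1" Padding="2">
                            <!-- PART_ContentHost is where the TextBox renders its text and caret -->
                            <ScrollViewer x:Name="PART_ContentHost" VerticalAlignment="Center" />
                        </Border>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>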

    Read the article

  • Apache server still running but users cannot connect to the website; after "sudo apachectl restart" they can connect again. What's wrong? [on hold]

    - by Tinyfool
    My website is http://ourcoders.com/, recently I found sometime user report can not connect to my website, but I ssh to server, I found Apache still running, like this: root@AY1401261057077842eaZ:~# ps aux|grep apache root 873 0.0 1.3 290496 13528 ? Ss Aug18 0:28 /usr/sbin/apache2 -k start www-data 3490 0.0 1.8 299004 18764 ? S Aug21 0:01 /usr/sbin/apache2 -k start www-data 3612 0.0 1.5 296008 15540 ? S Aug21 0:03 /usr/sbin/apache2 -k start www-data 3860 0.0 1.5 296636 16268 ? S Aug21 0:00 /usr/sbin/apache2 -k start www-data 3913 0.0 1.2 295468 13084 ? S Aug21 0:00 /usr/sbin/apache2 -k start www-data 3931 0.0 1.7 298488 18228 ? S 16:02 0:01 /usr/sbin/apache2 -k start www-data 3938 0.0 1.9 299128 19724 ? S 16:02 0:02 /usr/sbin/apache2 -k start www-data 4465 0.0 1.6 296688 16404 ? S Aug21 0:00 /usr/sbin/apache2 -k start www-data 5075 0.0 1.2 295468 13044 ? S 16:16 0:00 /usr/sbin/apache2 -k start www-data 5153 0.0 1.5 295880 15612 ? S 16:17 0:00 /usr/sbin/apache2 -k start www-data 5770 0.0 1.5 296608 16016 ? S 16:30 0:00 /usr/sbin/apache2 -k start www-data 5773 0.0 1.6 296948 16640 ? S 16:30 0:00 /usr/sbin/apache2 -k start www-data 5816 0.0 1.6 297216 16976 ? S 16:31 0:01 /usr/sbin/apache2 -k start www-data 5918 0.0 1.7 298228 17820 ? S 16:33 0:01 /usr/sbin/apache2 -k start www-data 6023 0.0 1.9 299864 19840 ? S 16:35 0:13 /usr/sbin/apache2 -k start www-data 6073 0.0 1.7 298480 18120 ? S 16:36 0:02 /usr/sbin/apache2 -k start www-data 6088 0.0 2.0 300488 21008 ? S 16:36 0:12 /usr/sbin/apache2 -k start www-data 6114 0.0 1.7 298548 18268 ? S 16:37 0:12 /usr/sbin/apache2 -k start www-data 6134 0.0 1.6 296688 16532 ? S 16:37 0:04 /usr/sbin/apache2 -k start www-data 6193 0.0 1.7 297908 17420 ? S 16:38 0:08 /usr/sbin/apache2 -k start www-data 6821 0.0 1.8 299556 19072 ? S 16:43 0:11 /usr/sbin/apache2 -k start www-data 7058 0.0 1.7 298676 18204 ? S 16:48 0:10 /usr/sbin/apache2 -k start www-data 7065 0.0 1.8 299028 18868 ? S 16:48 0:11 /usr/sbin/apache2 -k start www-data 7084 0.0 1.8 299508 19020 ? S 16:48 0:11 /usr/sbin/apache2 -k start www-data 7221 0.0 1.8 299160 18768 ? S 16:51 0:09 /usr/sbin/apache2 -k start www-data 11453 0.0 1.7 298484 18256 ? S 09:39 0:02 /usr/sbin/apache2 -k start root 26324 0.0 0.0 8084 920 pts/0 S+ 22:52 0:00 grep --color=auto apache root 28517 0.0 0.0 4404 612 ? S Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28518 0.0 0.0 4404 616 ? S Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28519 0.0 0.0 4404 612 ? S Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28520 0.0 0.0 4404 616 ? S Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28521 0.0 0.0 4312 552 ? S Aug21 0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28522 0.0 0.0 4308 548 ? S Aug21 0:07 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28523 0.0 0.0 4176 352 ? S Aug21 0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log root 28524 0.0 0.0 4180 356 ? S Aug21 0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log Today's only error log is blow. [Sat Aug 23 22:52:47 2014] [notice] SIGHUP received. 
Attempting to restart [Sat Aug 23 22:52:47 2014] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.13 with Suhosin-Patch configured -- resuming normal operations traffic information: cat access-2014-08-23.log | cut -d " " -f4 |cut -d":" -f2 |sort|uniq -c |sort -nr 5692 14 5291 15 5083 16 4723 23 4463 12 4057 17 4011 11 3926 13 3852 10 3187 05 3176 09 3055 06 2790 07 2672 00 2608 02 2591 01 2577 04 2514 03 2497 08 707 22 88 18 After I use "sudo apachectl restart", user can connect my website. So I want to know? What is the problem? And if "sudo apachectl restart" is needed, can I automate run this command? Today this kind struts appear again, and I run netstat -a -n Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN tcp 0 0 115.28.146.116:80 125.39.208.120:50708 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:50278 SYN_RECV tcp 0 0 115.28.146.116:80 220.173.142.152:23320 SYN_RECV tcp 0 0 115.28.146.116:80 60.173.247.132:52851 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:39397 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:56894 SYN_RECV tcp 0 0 115.28.146.116:80 183.129.174.2:21291 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.120:44499 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.120:34017 SYN_RECV tcp 0 0 115.28.146.116:80 124.65.50.210:3774 SYN_RECV tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:15770 0.0.0.0:* LISTEN tcp 1 0 115.28.146.116:80 14.127.65.219:61633 CLOSE_WAIT tcp 305 0 115.28.146.116:80 125.39.208.120:37593 ESTABLISHED tcp 0 0 10.144.142.201:52866 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:52873 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:52868 10.146.6.61:3306 TIME_WAIT tcp 343 0 115.28.146.116:80 182.118.20.215:50709 ESTABLISHED tcp 0 0 115.28.146.116:54784 173.194.127.243:80 ESTABLISHED tcp 1 0 115.28.146.116:80 116.192.2.185:41253 CLOSE_WAIT tcp 0 0 10.144.142.201:52876 10.146.6.61:3306 ESTABLISHED tcp 559 0 115.28.146.116:80 218.241.144.114:54501 ESTABLISHED tcp 376 0 115.28.146.116:80 116.213.196.119:50604 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59339 CLOSE_WAIT tcp 214 0 115.28.146.116:80 142.4.215.40:34443 ESTABLISHED tcp 0 0 115.28.146.116:48635 115.28.146.116:80 ESTABLISHED tcp 187 0 115.28.146.116:80 115.28.146.116:48635 ESTABLISHED tcp 0 0 10.144.142.201:52853 10.146.6.61:3306 TIME_WAIT tcp 594 0 115.28.146.116:80 183.129.174.2:7090 CLOSE_WAIT tcp 0 0 10.144.142.201:52874 10.146.6.61:3306 TIME_WAIT tcp 0 0 115.28.146.116:80 182.118.20.166:44081 TIME_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59028 CLOSE_WAIT tcp 1 0 115.28.146.116:80 14.127.65.219:61665 CLOSE_WAIT tcp 0 0 10.144.142.201:52860 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:46983 10.146.6.61:3306 ESTABLISHED tcp 0 2290 115.28.146.116:80 14.154.179.243:41049 FIN_WAIT1 tcp 0 0 10.144.142.201:42900 10.146.6.61:3306 ESTABLISHED tcp 571 0 115.28.146.116:80 220.173.142.152:23295 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59337 CLOSE_WAIT tcp 438 0 115.28.146.116:80 42.120.74.202:31567 CLOSE_WAIT tcp 0 0 115.28.146.116:80 113.36.238.28:59498 ESTABLISHED tcp 259 0 115.28.146.116:80 66.249.65.56:36739 ESTABLISHED tcp 0 0 115.28.146.116:80 113.36.238.28:59341 ESTABLISHED tcp 0 0 115.28.146.116:80 142.4.215.40:34267 FIN_WAIT2 tcp 799 0 115.28.146.116:80 180.173.88.1:52779 ESTABLISHED tcp 0 0 115.28.146.116:80 117.136.25.132:25207 FIN_WAIT2 tcp 0 0 115.28.146.116:80 220.181.108.186:42540 TIME_WAIT tcp 0 0 10.144.142.201:59902 10.242.174.13:80 TIME_WAIT tcp 0 1820 
115.28.146.116:80 218.22.140.90:39266 LAST_ACK tcp 0 0 115.28.146.116:80 66.249.65.64:56977 TIME_WAIT tcp 669 0 115.28.146.116:80 83.251.90.61:49664 ESTABLISHED tcp 0 0 10.144.142.201:52872 10.146.6.61:3306 TIME_WAIT tcp 233 0 115.28.146.116:80 54.202.88.0:43398 CLOSE_WAIT tcp 479 0 115.28.146.116:80 65.49.44.149:25739 ESTABLISHED tcp 378 0 115.28.146.116:80 148.251.124.173:39313 CLOSE_WAIT tcp 1 0 115.28.146.116:80 14.127.65.219:61697 CLOSE_WAIT tcp 1 0 115.28.146.116:80 49.4.158.2:52986 CLOSE_WAIT tcp 769 0 115.28.146.116:80 14.127.65.219:61537 ESTABLISHED tcp 0 0 10.144.142.201:52859 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:55734 10.164.2.163:9200 TIME_WAIT tcp 563 0 115.28.146.116:80 202.55.20.10:22577 CLOSE_WAIT tcp 194 0 115.28.146.116:80 37.58.100.165:50908 CLOSE_WAIT tcp 791 0 115.28.146.116:80 116.192.2.185:45628 ESTABLISHED tcp 709 0 115.28.146.116:80 113.116.61.178:65209 ESTABLISHED tcp 706 0 115.28.146.116:80 183.227.44.237:54519 ESTABLISHED tcp 301 0 115.28.146.116:80 118.198.243.127:31180 ESTABLISHED tcp 0 0 10.144.142.201:55721 10.164.2.163:9200 TIME_WAIT tcp 0 0 10.144.142.201:55726 10.164.2.163:9200 TIME_WAIT tcp 0 0 10.144.142.201:55723 10.164.2.163:9200 TIME_WAIT tcp 681 0 115.28.146.116:80 83.251.90.61:49662 ESTABLISHED tcp 0 0 115.28.146.116:80 83.251.90.61:65274 TIME_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59022 CLOSE_WAIT tcp 1 0 115.28.146.116:80 180.173.88.1:52781 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59037 CLOSE_WAIT tcp 0 0 10.144.142.201:55728 10.164.2.163:9200 TIME_WAIT tcp 231 0 115.28.146.116:37596 110.75.102.62:80 CLOSE_WAIT tcp 1 0 115.28.146.116:80 14.127.65.219:61569 CLOSE_WAIT tcp 0 0 10.144.142.201:51310 10.146.6.61:3306 ESTABLISHED tcp 299 0 115.28.146.116:80 123.125.71.16:36281 ESTABLISHED tcp 0 0 115.28.146.116:48620 115.28.146.116:80 ESTABLISHED tcp 1 0 115.28.146.116:80 183.227.44.237:54520 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59026 CLOSE_WAIT tcp 479 0 115.28.146.116:80 65.49.44.149:5490 ESTABLISHED tcp 665 0 115.28.146.116:80 83.251.90.61:49663 ESTABLISHED tcp 0 0 115.28.146.116:53744 173.194.127.147:80 ESTABLISHED tcp 1 0 115.28.146.116:80 113.36.238.28:59023 CLOSE_WAIT tcp 0 0 115.28.146.116:22 116.192.2.185:34205 ESTABLISHED tcp 333 0 115.28.146.116:80 149.174.113.111:54338 CLOSE_WAIT tcp 0 0 10.144.142.201:52861 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:52863 10.146.6.61:3306 TIME_WAIT tcp 1 0 115.28.146.116:80 116.192.2.185:43272 CLOSE_WAIT tcp 767 0 115.28.146.116:80 49.4.158.2:52947 CLOSE_WAIT tcp 668 0 115.28.146.116:80 83.251.90.61:49665 ESTABLISHED tcp 642 0 115.28.146.116:80 222.78.185.50:55788 ESTABLISHED tcp 710 0 115.28.146.116:80 113.116.61.178:65264 ESTABLISHED tcp 284 0 115.28.146.116:80 157.55.39.243:65185 ESTABLISHED tcp 450 0 115.28.146.116:80 65.49.44.149:55496 ESTABLISHED tcp 1 0 115.28.146.116:80 116.192.2.185:36629 CLOSE_WAIT tcp 233 0 115.28.146.116:80 54.202.88.0:42424 CLOSE_WAIT tcp 187 0 115.28.146.116:80 115.28.146.116:48620 ESTABLISHED tcp 1 0 115.28.146.116:80 14.127.65.219:61601 CLOSE_WAIT tcp 776 0 115.28.146.116:80 202.118.253.102:64883 CLOSE_WAIT tcp 841 0 115.28.146.116:80 37.228.105.28:49472 ESTABLISHED tcp 787 0 115.28.146.116:80 112.65.226.198:52192 ESTABLISHED tcp 0 0 10.144.142.201:55717 10.164.2.163:9200 TIME_WAIT tcp 233 0 115.28.146.116:80 54.202.88.0:42855 CLOSE_WAIT tcp 379 0 115.28.146.116:80 101.226.166.219:2322 ESTABLISHED tcp 0 0 115.28.146.116:80 183.60.212.152:43063 CLOSE_WAIT tcp 1 0 115.28.146.116:80 180.173.88.1:52780 CLOSE_WAIT tcp 784 0 
115.28.146.116:80 101.95.29.26:63094 ESTABLISHED tcp 463 0 115.28.146.116:80 65.49.44.149:53876 ESTABLISHED tcp 1 0 115.28.146.116:80 116.192.2.185:37946 CLOSE_WAIT tcp 479 0 115.28.146.116:80 65.49.44.149:41157 ESTABLISHED tcp 1 0 115.28.146.116:80 113.36.238.28:59036 CLOSE_WAIT tcp 1 0 115.28.146.116:80 49.4.158.2:52984 CLOSE_WAIT tcp 1 0 115.28.146.116:80 116.192.2.185:38100 CLOSE_WAIT tcp 0 0 10.144.142.201:52865 10.146.6.61:3306 TIME_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59027 CLOSE_WAIT tcp 0 0 115.28.146.116:36508 173.194.127.81:80 ESTABLISHED tcp 210 0 115.28.146.116:80 188.143.232.123:47775 ESTABLISHED tcp 1 0 115.28.146.116:80 113.36.238.28:59025 CLOSE_WAIT tcp 0 0 10.144.142.201:52857 10.146.6.61:3306 TIME_WAIT tcp 654 0 115.28.146.116:80 49.4.158.2:52985 ESTABLISHED tcp 0 0 115.28.146.116:58627 110.75.102.62:80 ESTABLISHED tcp 782 0 115.28.146.116:80 180.153.219.13:40293 ESTABLISHED tcp 792 0 115.28.146.116:80 116.192.2.185:48187 CLOSE_WAIT tcp6 0 0 :::22 :::* LISTEN udp 0 0 115.28.146.116:123 0.0.0.0:* udp 0 0 10.144.142.201:123 0.0.0.0:* udp 0 0 127.0.0.1:123 0.0.0.0:* udp 0 0 0.0.0.0:123 0.0.0.0:* udp6 0 0 :::123 :::* Active UNIX domain sockets (servers and established) Proto RefCnt Flags Type State I-Node Path unix 2 [ ACC ] STREAM LISTENING 8447 /var/run/mysqld/mysqld.sock unix 2 [ ACC ] SEQPACKET LISTENING 6678 /run/udev/control unix 2 [ ACC ] STREAM LISTENING 6482 @/com/ubuntu/upstart unix 2 [ ACC ] STREAM LISTENING 7543 /var/run/dbus/system_bus_socket unix 7 [ ] DGRAM 7551 /dev/log unix 2 [ ACC ] STREAM LISTENING 7650 /var/run/nscd/socket unix 2 [ ] DGRAM 7156424 unix 3 [ ] STREAM CONNECTED 7156137 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 7156136 unix 2 [ ] DGRAM 7156135 unix 2 [ ] DGRAM 7155834 unix 2 [ ] DGRAM 9734 unix 3 [ ] STREAM CONNECTED 9151 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 9150 unix 3 [ ] STREAM CONNECTED 9136 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 9135 unix 3 [ ] STREAM CONNECTED 9106 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 9105 unix 2 [ ] DGRAM 9073 unix 3 [ ] STREAM CONNECTED 7575 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 7574 unix 3 [ ] STREAM CONNECTED 7565 unix 3 [ ] STREAM CONNECTED 7564 unix 3 [ ] STREAM CONNECTED 7332 @/com/ubuntu/upstart unix 3 [ ] STREAM CONNECTED 7330 unix 3 [ ] DGRAM 6712 unix 3 [ ] DGRAM 6711 unix 3 [ ] STREAM CONNECTED 6662 @/com/ubuntu/upstart unix 3 [ ] STREAM CONNECTED 6635
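    The ps output only proves that worker processes exist, not that any of them are free; with this many connections stuck in SYN_RECV and CLOSE_WAIT, the likely cause is an exhausted worker pool, so checking the error log for "server reached MaxClients" messages and raising MaxClients / fixing slow PHP or MySQL calls is the real fix. As a stop-gap, the restart can be automated with a small watchdog run from cron; the paths and URL below are assumptions, adjust for your server:
        #!/bin/sh
        # /usr/local/bin/apache-watchdog.sh -- restart Apache if the site stops answering locally
        if ! curl -s --max-time 10 -o /dev/null http://127.0.0.1/; then
            /usr/sbin/apachectl restart
            echo "$(date) apache restarted" >> /var/log/apache-watchdog.log
        fi

        # crontab entry, every 5 minutes:
        # */5 * * * * /usr/local/bin/apache-watchdog.sh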

    Read the article

  • I want to change the DPI with ImageMagick without changing the actual byte size of the image data

    - by user1694803
    I feel so horribly sorry that I have to ask this question here, but after hours of researching how to do an actually very simple task I'm still failing... In Gimp there is a very simple way to do what I want. I only have the German dialog installed but I'll try to translate it. I'm talking about going to "Picture-PrintingSize" and then adjusting the Values "X-Resolution" and "Y-Resolution" which are known to me as so called DPI values. You can also choose the format which by default is "Pixel/Inch". (In German the dialog is "Bild-Druckgröße" and there "X-Auflösung" and "Y-Auflösung") Ok, the values there are often "72" by default. When I change them to e.g. "300" this has the effect that the image stays the same on the computer, but if I print it, it will be smaller if you look at it, but all the details are still there, just smaller - it has a higher resolution on the printed paper (but smaller size... which is fine for me). I am often doing that when I am working with LaTeX, or to be exact with the command "pdflatex" on a recent Ubuntu-Machine. When I'm doing the above process with Gimp manually everything works just fine. The images will appear smaller in the resulting PDF but with high printing quality. What I am trying to do is to automate the process of going into Gimp and adjusting the DPI values. Since Imagemagick is known to be superb and I used it for many other tasks I tried to achieve my goal with this tool. But it does just not do what I want. After trying a lot of things I think this actually is be the command that should be my friend: convert input.png -density 300 output.png This should set the DPI to 300, as I can read everywhere in the web. It seems to work. When I check the file it stays the same. file input.png output.png input.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced output.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced When I use this command, it seems like it did what I wanted: identify -verbose output.png | grep 300 Resolution: 300x300 PNG:pHYs : x_res=300, y_res=300, units=0 (Funny enough, the same output comes for input.png which confuses me... so this might be the wrong parameters to watch?) But when I now render my TeX with "pdflatex" the image is still big and blurry. Also when I open the image with Gimp again the DPI values are set to "72" instead of "300". So there actually was no effect at all. Now what is the problem here. Am I getting something completely wrong? I can't be that wrong since everything works just fine with Gimp... Thanks for any help in this. I am also open to other automated solutions which are easily done on a Linux system...
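    The missing piece here is most likely the units: identify reports units=0 (undefined), so GIMP and pdflatex fall back to 72 dpi. Declaring the units explicitly when writing should make the stored resolution meaningful; a sketch of the command to try first, with the same file names as above:
        convert input.png -units PixelsPerInch -density 300 output.png

        # verify resolution and units of the result
        identify -verbose output.png | grep -E 'Resolution|Units'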

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there has been several reports on TFS databases growing too fast and growing too big.  Notable this has been observed when one has started to use more features of the Testing system.  Also, the TFS 2010 handles test results differently from TFS 2008, and this leads to more data stored in the TFS databases. As a consequence of this there has been released some tools to remove unneeded data in the database, and also some fixes to correct for bugs which has been found and corrected during this process.  Further some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among these are: Anu’s very important blog post here describes both the problem and solutions to handle it.  She describes both the Test Attachment Cleaner tool, and also some QFE/CU releases to fix some underlying bugs which prevented the tool from being fully effective. Brian Harry’s blog post here describes the problem too This forum thread describes the problem with some solution hints. Ravi Shanker’s blog post here describes best practices on solving this (TBP) Grant Holidays blogpost here describes strategies to use the Test Attachment Cleaner both to detect space problems and how to rectify them.   The problem can be divided into the following areas: Publishing of test results from builds Publishing of manual test results and their attachments in particular Publishing of deployment binaries for use during a test run Bugs in SQL server preventing total cleanup of data (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run.  Some of this data can grow rather large, like IntelliTrace logs and video recordings.   Also the pushing of binaries which happen for automated test runs, including tests run during a build using code coverage which will include all the files in the deployment folder, contributes a lot to the size of the attached data.   In order to handle this systematically, I have set up a 3-stage process: Find out if you have a database space issue Set up your TFS server to minimize potential database issues If you have the “problem”, clean up the database and otherwise keep it clean   Analyze the data Are your database( s) growing ?  Are unused test results growing out of proportion ? To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some  more detailed information. If you don’t have too many databases you can use the SQL Server reports from within the Management Studio to analyze the database and table sizes. Or, you can use a set of queries . I find queries often faster to use because I can tweak them the way I want them.  But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them. If you have multiple Project Collections, find out which might have problems: (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure have been changed.  I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS Databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.  
use master select DB_NAME(database_id) AS DBName, (size/128) SizeInMB FROM sys.master_files where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration' order by size desc Doing this on one of our SQL servers gives the following results: It is pretty easy to see on which collection to start the work   Find out which tables are possibly too large Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant’s blog to find the table sizes in descending size order. In our case we got this result: From Grant’s blog we learnt that the tbl_Content is in the Version Control category, so the major only big issue we have here is the tbl_AttachmentContent.   Find out which team projects have possibly too large attachments In order to use the TAC to find and eventually delete attachment data we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replace the collection database name with whatever applies in your case:   use Tfs_DefaultCollection select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by p.projectname order by sum(a.compressedlength) desc In our case we got this result (had to remove some names), out of more than 100 team projects accumulated over quite some years: As can be seen here it is pretty obvious the “Byggtjeneste – Projects” are the main team project to take care of, with the ones on lines 2-4 as the next ones.  Check which attachment types takes up the most space It can be nice to know which attachment types takes up the space, so run the following query: use Tfs_DefaultCollection select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by a.attachmenttype order by sum(a.compressedlength) desc We then got this result: From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu’s blog. Check which file types, by their extension, takes up the most space Run the following query use Tfs_DefaultCollection select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)as Extension, sum(compressedlength)/1024 as SizeInKB from tbl_Attachment group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) order by sum(compressedlength) desc This gives a result like this:   Now you should have collected enough information to tell you what to do – if you got to do something, and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.    Get your TFS server and environment properly set up Even if you have got the problem or if have yet not got the problem, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized.  To ensure this you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level. Install the QFE for KB2608743 – which also contains detailed instructions on its use, download from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs. 
Binaries will still be uploaded if: Code coverage is enabled in the test settings. You change the UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user which haven't installed this QFE. The hotfix should be installed to The build servers (the build agents) The machine hosting the Test Controller Local development computers (Visual Studio) Local test computers (MTM) It is not required to install it to the TFS Server, test agents or the build controller – it has no effect on these programs. If you use the SQL Server 2008 R2 you should also install the CU 10 (or later).  This CU fixes a potential problem of hanging “ghost” files.  This seems to happen only in certain trigger situations, but to ensure it doesn’t bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2 Work around:  If you suspect hanging ghost files, they can be – with some mental effort, deduced from the ghost counters using the following SQL query: use master SELECT DB_NAME(database_id) as 'database',OBJECT_NAME(object_id) as 'objectname', index_type_desc,ghost_record_count,version_ghost_record_count,record_count,avg_record_size_in_bytes FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL , 'DETAILED') The problem is a stalled ghost cleanup process.  Restarting the SQL server after having stopped all components that depends on it, like the TFS Server and SPS services – that is all applications that connect to the SQL server. Then restart the SQL server, and finally start up all dependent processes again.  (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1.  The R2 pre-SP1 and R2 SP1 have separate maintenance cycles, and are maintained individually. Each have its own set of CU’s. When it comes I will add the link here to that CU. The "hanging ghost file” issue came up after one have run the TAC, and deleted enourmes amount of data.  The SQL Server can get into this hanging state (without the QFE) in certain cases due to this. And of course, install and set up the Test Attachment Cleaner command line power tool.  This should be done following some guidelines from Ravi Shanker: “When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed. “ This rule minimizes the risk of the ghosted hang problem to occur, and further makes it easier for the SQL server ghosting process to work smoothly. “Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system” This is the last step in a 3 step process of removing SQL server data. First they are logically deleted. Then they are cleaned out by the ghosting process, and finally removed using the shrinkdb command. Cleaning out the attachments The TAC is run from the command line using a set of parameters and controlled by a settingsfile.  The parameters point out a server uri including the team project collection and also point at a specific team project. So in order to run this for multiple team projects regularly one has to set up a script to run the TAC multiple times, once for each team project.  
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means much more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days , as long as there are no active workitems connected to them. This setting can be useful to clean out all items, both in a clean-up once operation, and in a general There are two scenarios we need to consider: Cleaning up an existing overgrown database Maintaining a server to avoid an overgrown database using scheduled TAC   1. Cleaning up a database which has grown too big due to these attachments. This job is a “Once” job.  We do this once and then move on to make sure it won’t happen again, by taking the actions in 2) below.  In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and don’t bother about  the smaller stuff. That can be left a scheduled TAC cleanup ( 2 below). Here you can use a very general settings file, and just remove the large attachments, or you can choose to remove any old items.  Grant’s settings file is an example of the last one.  A settings file to remove only large attachments could look like this: <!-- Scenario : Remove large files --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> </Attachment> </DeletionCriteria> Or like this: If you want only to remove dll’s and pdb’s about that size, add an Extensions-section.  Without that section, all extensions will be deleted. <!-- Scenario : Remove large files of type dll's and pdb's --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="dll" /> <Include value="pdb" /> </Extensions> </Attachment> </DeletionCriteria> Before you start up your scheduled maintenance, you should clear out all older items. 2. Scheduled maintenance using the TAC If you run a schedule every night, and remove old items, and also remove them in small batches.  It is important to run this often, like every night, in order to keep the number of deleted items low. That way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let’s say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs.  Doing this every night ensures that only small amounts of data is deleted. <!-- Scenario : Remove old items except if they have active or resolved bugs --> <DeletionCriteria> <TestRun> <AgeInDays OlderThan="180" /> </TestRun> <Attachment /> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved"/> </LinkedBugs> </DeletionCriteria> In my experience there are projects which are left with active or resolved workitems, akthough no further work is done.  It can be wise to have a cleanup process with no restrictions on linked bugs at all. Note that you then have to remove the whole LinkedBugs section. A approach which could work better here is to do a two step approach, use the schedule above to with no LinkedBugs as a sweeper cleaning task taking away all data older than you could care about.  Then have another scheduled TAC task to take out more specifically attachments that you are not likely to use. 
This task could be much more specific, and based on your analysis clean out what you know is troublesome data. <!-- Scenario : Remove specific files early --> <DeletionCriteria> <TestRun > <AgeInDays OlderThan="30" /> </TestRun> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="iTrace"/> <Include value="dll"/> <Include value="pdb"/> <Include value="wmv"/> </Extensions> </Attachment> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved" /> </LinkedBugs> </DeletionCriteria> The readme document for the TAC says that it recognizes “internal” extensions, but it does recognize any extension. To run the tool do the following command: tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete   Shrinking the database You could run a shrink database command after the TAC has run in cases where there are a lot of data being deleted.  In this case you SHOULD do it, to free up all that space.  But, after the shrink operation you should do a rebuild indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild indexes, reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL server when it need to add more records.  In fact, it is regarded as a bad practice to shrink the database regularly.  So on a daily maintenance schedule you should NOT shrink the database. To shrink the database you do a DBCC SHRINKDATABASE command, and then follow up with a DBCC INDEXDEFRAG afterwards.  I find the easiest way to do this is to create a SQL Maintenance plan including the Shrink Database Task and the Rebuild Index Task and just execute it when you need to do this.
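    For convenience, a sketch of that one-off reclaim step, to be run only after a large TAC cleanup and never as part of the nightly schedule. Replace the database name with the collection in question; sp_MSforeachtable is an undocumented but commonly used SQL Server helper, so treat this as an assumption-laden example rather than a supported recipe:
        USE [Tfs_DefaultCollection];
        GO
        -- Physically reclaim the space freed once the ghost cleanup has finished
        DBCC SHRINKDATABASE (N'Tfs_DefaultCollection');
        GO
        -- Shrinking leaves the indexes heavily fragmented, so rebuild (not just reorganize) afterwards
        EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';
        GO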

    Read the article

  • Yet Another ASP.NET MVC CRUD Tutorial

    - by Ricardo Peres
    I know that I have not posted much on MVC, mostly because I don’t use it on my daily life, but since I find it so interesting, and since it is gaining such popularity, I will be talking about it much more. This time, it’s about the most basic of scenarios: CRUD. Although there are several ASP.NET MVC tutorials out there that cover ordinary CRUD operations, I couldn’t find any that would explain how we can have also AJAX, optimistic concurrency control and validation, using Entity Framework Code First, so I set out to write one! I won’t go into explaining what is MVC, Code First or optimistic concurrency control, or AJAX, I assume you are all familiar with these concepts by now. Let’s consider an hypothetical use case, products. For simplicity, we only want to be able to either view a single product or edit this product. First, we need our model: 1: public class Product 2: { 3: public Product() 4: { 5: this.Details = new HashSet<OrderDetail>(); 6: } 7:  8: [Required] 9: [StringLength(50)] 10: public String Name 11: { 12: get; 13: set; 14: } 15:  16: [Key] 17: [ScaffoldColumn(false)] 18: [DatabaseGenerated(DatabaseGeneratedOption.Identity)] 19: public Int32 ProductId 20: { 21: get; 22: set; 23: } 24:  25: [Required] 26: [Range(1, 100)] 27: public Decimal Price 28: { 29: get; 30: set; 31: } 32:  33: public virtual ISet<OrderDetail> Details 34: { 35: get; 36: protected set; 37: } 38:  39: [Timestamp] 40: [ScaffoldColumn(false)] 41: public Byte[] RowVersion 42: { 43: get; 44: set; 45: } 46: } Keep in mind that this is a simple scenario. Let’s see what we have: A class Product, that maps to a product record on the database; A product has a required (RequiredAttribute) Name property which can contain up to 50 characters (StringLengthAttribute); The product’s Price must be a decimal value between 1 and 100 (RangeAttribute); It contains a set of order details, for each time that it has been ordered, which we will not talk about (Details); The record’s primary key (mapped to property ProductId) comes from a SQL Server IDENTITY column generated by the database (KeyAttribute, DatabaseGeneratedAttribute); The table uses a SQL Server ROWVERSION (previously known as TIMESTAMP) column for optimistic concurrency control mapped to property RowVersion (TimestampAttribute). Then we will need a controller for viewing product details, which will located on folder ~/Controllers under the name ProductController: 1: public class ProductController : Controller 2: { 3: [HttpGet] 4: public ViewResult Get(Int32 id = 0) 5: { 6: if (id != 0) 7: { 8: using (ProductContext ctx = new ProductContext()) 9: { 10: return (this.View("Single", ctx.Products.Find(id) ?? new Product())); 11: } 12: } 13: else 14: { 15: return (this.View("Single", new Product())); 16: } 17: } 18: } If the requested product does not exist, or one was not requested at all, one with default values will be returned. I am using a view named Single to display the product’s details, more on that later. As you can see, it delegates the loading of products to an Entity Framework context, which is defined as: 1: public class ProductContext: DbContext 2: { 3: public DbSet<Product> Products 4: { 5: get; 6: set; 7: } 8: } Like I said before, I’ll keep it simple for now, only aggregate root Product is available. 
The controller will use the standard routes defined by the Visual Studio ASP.NET MVC 3 template: 1: routes.MapRoute( 2: "Default", // Route name 3: "{controller}/{action}/{id}", // URL with parameters 4: new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults 5: ); Next, we need a view for displaying the product details, let’s call it Single, and have it located under ~/Views/Product: 1: <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %> 2: <!DOCTYPE html> 3:  4: <html> 5: <head runat="server"> 6: <title>Product</title> 7: <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script> 1:  2: <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.validate.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"> 1: </script> 2: <script type="text/javascript"> 3: function onFailure(error) 4: { 5: } 6:  7: function onComplete(ctx) 8: { 9: } 10:  11: </script> 8: </head> 9: <body> 10: <div> 11: <% 1: : this.Html.ValidationSummary(false) %> 12: <% 1: using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions{ HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %> 13: <% 1: : this.Html.EditorForModel() %> 14: <input type="submit" name="submit" value="Submit" /> 15: <% 1: } %> 16: </div> 17: </body> 18: </html> Yes… I am using ASPX syntax… sorry about that!   I implemented an editor template for the Product class, which must be located on the ~/Views/Shared/EditorTemplates folder as file Product.ascx: 1: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<Product>" %> 2: <div> 3: <%: this.Html.HiddenFor(model => model.ProductId) %> 4: <%: this.Html.HiddenFor(model => model.RowVersion) %> 5: <fieldset> 6: <legend>Product</legend> 7: <div class="editor-label"> 8: <%: this.Html.LabelFor(model => model.Name) %> 9: </div> 10: <div class="editor-field"> 11: <%: this.Html.TextBoxFor(model => model.Name) %> 12: <%: this.Html.ValidationMessageFor(model => model.Name) %> 13: </div> 14: <div class="editor-label"> 15: <%= this.Html.LabelFor(model => model.Price) %> 16: </div> 17: <div class="editor-field"> 18: <%= this.Html.TextBoxFor(model => model.Price) %> 19: <%: this.Html.ValidationMessageFor(model => model.Price) %> 20: </div> 21: </fieldset> 22: </div> One thing you’ll notice is, I am including both the ProductId and the RowVersion properties as hidden fields; they will come handy later or, so that we know what product and version we are editing. The other thing is the included JavaScript files: jQuery, jQuery UI and unobtrusive validations. Also, I am not using the Content extension method for translating relative URLs, because that way I would lose JavaScript intellisense for jQuery functions. OK, so, at this moment, I want to add support for AJAX and optimistic concurrency control. So I write a controller method like this: 1: [HttpPost] 2: [AjaxOnly] 3: [Authorize] 4: public JsonResult Edit(Product product) 5: { 6: if (this.TryValidateModel(product) == true) 7: { 8: using (BlogContext ctx = new BlogContext()) 9: { 10: Boolean success = false; 11:  12: ctx.Entry(product).State = (product.ProductId == 0) ? 
EntityState.Added : EntityState.Modified; 13:  14: try 15: { 16: success = (ctx.SaveChanges() == 1); 17: } 18: catch (DbUpdateConcurrencyException) 19: { 20: ctx.Entry(product).Reload(); 21: } 22:  23: return (this.Json(new { Success = success, ProductId = product.ProductId, RowVersion = Convert.ToBase64String(product.RowVersion) })); 24: } 25: } 26: else 27: { 28: return (this.Json(new { Success = false, ProductId = 0, RowVersion = String.Empty })); 29: } 30: } So, this method is only valid for HTTP POST requests (HttpPost), coming from AJAX (AjaxOnly, from MVC Futures), and from authenticated users (Authorize). It returns a JSON object, which is what you would normally use for AJAX requests, containing three properties: Success: a boolean flag; RowVersion: the current version of the ROWVERSION column as a Base-64 string; ProductId: the inserted product id, as coming from the database. If the product is new, it will be inserted into the database, and its primary key will be returned into the ProductId property. Success will be set to true; If a DbUpdateConcurrencyException occurs, it means that the value in the RowVersion property does not match the current ROWVERSION column value on the database, so the record must have been modified between the time that the page was loaded and the time we attempted to save the product. In this case, the controller just gets the new value from the database and returns it in the JSON object; Success will be false. Otherwise, it will be updated, and Success, ProductId and RowVersion will all have their values set accordingly. So let’s see how we can react to these situations on the client side. Specifically, we want to deal with these situations: The user is not logged in when the update/create request is made, perhaps the cookie expired; The optimistic concurrency check failed; All went well. So, let’s change our view: 1: <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %> 2: <%@ Import Namespace="System.Web.Security" %> 3:  4: <!DOCTYPE html> 5:  6: <html> 7: <head runat="server"> 8: <title>Product</title> 9: <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script> 1:  2: <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.validate.js" type="text/javascript"> 1: </script> 2: <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"> 1: </script> 2: <script type="text/javascript"> 3: function onFailure(error) 4: { 5: window.alert('An error occurred: ' + error); 6: } 7:  8: function onSuccess(ctx) 9: { 10: if (typeof (ctx.Success) != 'undefined') 11: { 12: $('input#ProductId').val(ctx.ProductId); 13: $('input#RowVersion').val(ctx.RowVersion); 14:  15: if (ctx.Success == false) 16: { 17: window.alert('An error occurred while updating the entity: it may have been modified by third parties. Please try again.'); 18: } 19: else 20: { 21: window.alert('Saved successfully'); 22: } 23: } 24: else 25: { 26: if (window.confirm('Not logged in. 
Login now?') == true) 27: { 28: document.location.href = '<%: FormsAuthentication.LoginUrl %>?ReturnURL=' + document.location.pathname; 29: } 30: } 31: } 32:  33: </script> 10: </head> 11: <body> 12: <div> 13: <% 1: : this.Html.ValidationSummary(false) %> 14: <% 1: using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions{ HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %> 15: <% 1: : this.Html.EditorForModel() %> 16: <input type="submit" name="submit" value="Submit" /> 17: <% 1: } %> 18: </div> 19: </body> 20: </html> The implementation of the onSuccess function first checks if the response contains a Success property, if not, the most likely cause is the request was redirected to the login page (using Forms Authentication), because it wasn’t authenticated, so we navigate there as well, keeping the reference to the current page. It then saves the current values of the ProductId and RowVersion properties to their respective hidden fields. They will be sent on each successive post and will be used in determining if the request is for adding a new product or to updating an existing one. The only thing missing is the ability to insert a new product, after inserting/editing an existing one, which can be easily achieved using this snippet: 1: <input type="button" value="New" onclick="$('input#ProductId').val('');$('input#RowVersion').val('');"/> And that’s it.
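    One small note: the Edit action uses the AjaxOnly attribute from the ASP.NET MVC Futures package. If you would rather not take that dependency, a minimal stand-in can be sketched like this (my own sketch, not the Futures implementation):
        using System.Reflection;
        using System.Web.Mvc;

        public sealed class AjaxOnlyAttribute : ActionMethodSelectorAttribute
        {
            // Lets the action be selected only for requests made via XMLHttpRequest
            public override bool IsValidForRequest(ControllerContext controllerContext, MethodInfo methodInfo)
            {
                return controllerContext.HttpContext.Request.IsAjaxRequest();
            }
        }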

    Read the article

< Previous Page | 751 752 753 754 755 756 757 758 759 760 761 762  | Next Page >