Search Results

Search found 29508 results on 1181 pages for 'object initializers'.


  • FluentPath: a fluent wrapper around System.IO

    - by Bertrand Le Roy
    .NET is now more than eight years old, and some of its APIs got old with more grace than others. System.IO in particular has always been a little awkward. It's mostly static method calls (Path.*, Directory.*, etc.) and some stateful classes (DirectoryInfo, FileInfo). In these APIs, paths are plain strings.

    Since .NET v1, lots of good things happened to C#: lambda expressions, extension methods, optional parameters, to name just a few. Outside of .NET, other interesting things happened as well. For example, you might have heard about this JavaScript library that had some success introducing a fluent API to handle the hierarchical structure of the HTML DOM. You know? jQuery.

    Knowing all that, every time I need to use the stuff in System.IO, I cringe. So I thought I'd just build a more modern wrapper around it. I used a fluent API based on an essentially immutable Path type and an enumeration of such path objects. To achieve the fluent style, a healthy dose of lambda expressions is used to act on the objects.

    Without further ado, here's an example of what you can do with the new API. In that example, I'm using a Media Center extension that wants all video files to be in their own folder. For that, I need a small tool that creates directories for each video file and moves the files in there. Here's the code for it:

        Path.Get(args[0])
            .Select(p => p.Extension == ".avi" || p.Extension == ".m4v" ||
                         p.Extension == ".wmv" || p.Extension == ".mp4" ||
                         p.Extension == ".dvr-ms" || p.Extension == ".mpg" ||
                         p.Extension == ".mkv")
            .CreateDirectory(p => p.Parent
                .Combine(p.FileNameWithoutExtension))
            .Previous()
            .Move(p => p.Parent
                .Combine(p.FileNameWithoutExtension)
                .Combine(p.FileName));

    This code creates a Path object pointing at the path pointed to by the first command-line argument of my executable. It then selects all video files. After that, it creates directories that have the same names as each of the files, but without their extension. The result of that operation is the set of created directories. We can now get back to the previous set using the Previous method, and finally we can move each of the files in the set to the corresponding freshly created directory, whose name is the combination of the parent directory and the filename without extension.

    The new fluent path library covers a fair part of what's in System.IO in a single, convenient API. Check it out, I hope you'll enjoy it. Suggestions are more than welcome. For example, should I make this its own project on CodePlex or is this informal style just OK? Anything missing that you'd like to see? Is there a specific example you'd like to see expressed with the new API? Bugs? The code can be downloaded from here (this is under a new BSD license): http://weblogs.asp.net/blogs/bleroy/Samples/FluentPath.zip

    Read the article

  • ASP.NET Error Handling: Creating an extension method to send error email

    - by Jalpesh P. Vadgama
    Error handling in ASP.NET is needed to deal with whatever errors occur, and we all use it in one scenario or another. But some errors only occur in specific scenarios in a production environment, and in those cases we can't show our programming errors to the end user. So we put up an error page, or whatever best suits our requirements. As programmers, though, we should still know about the error so we can track down the scenario and solve or handle it. In this kind of situation an error email comes in handy: whenever anything goes wrong in the system, it sends the error to our email. Here I am going to write an extension method which will send errors by email.

    From .NET Framework 3.5 onwards, extension methods provide a unique way to extend your classes. You can find more information about extension methods here. So let's create an extension method by implementing a static class like the following. I am going to use the same code for sending email via my Gmail account from here. Following is the code for that:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Net.Mail;

        namespace Experiement
        {
            public static class MyExtension
            {
                public static void SendErrorEmail(this Exception ex)
                {
                    MailMessage mailMessage = new MailMessage(
                        new MailAddress("[email protected]"),
                        new MailAddress("[email protected]"));
                    mailMessage.Subject = "Exception Occured in your site";
                    mailMessage.IsBodyHtml = true;

                    System.Text.StringBuilder errorMessage = new System.Text.StringBuilder();
                    errorMessage.AppendLine(string.Format("<B>{0}</B>:{1}<BR/>", "Exception", ex.Message));
                    errorMessage.AppendLine(string.Format("<B>{0}</B>:{1}<BR/>", "Stack Trace", ex.StackTrace));
                    if (ex.InnerException != null)
                    {
                        errorMessage.AppendLine(string.Format("<B>{0}</B>:{1}<BR/>", "Inner Exception", ex.InnerException.Message));
                        errorMessage.AppendLine(string.Format("<B>{0}</B>:{1}<BR/>", "Inner Stack Trace", ex.InnerException.StackTrace));
                    }
                    mailMessage.Body = errorMessage.ToString();

                    System.Net.NetworkCredential networkCredentials =
                        new System.Net.NetworkCredential("[email protected]", "password");
                    SmtpClient smtpClient = new SmtpClient();
                    smtpClient.EnableSsl = true;
                    smtpClient.UseDefaultCredentials = false;
                    smtpClient.Credentials = networkCredentials;
                    smtpClient.Host = "smtp.gmail.com";
                    smtpClient.Port = 587;
                    smtpClient.Send(mailMessage);
                }
            }
        }

    After creating the extension method, let us use it to handle an error in the Page_Load event of a page:

        using System;

        namespace Experiement
        {
            public partial class WebForm1 : System.Web.UI.Page
            {
                protected void Page_Load(object sender, System.EventArgs e)
                {
                    try
                    {
                        throw new Exception("My custom Exception");
                    }
                    catch (Exception ex)
                    {
                        ex.SendErrorEmail();
                        Response.Write(ex.Message);
                    }
                }
            }
        }

    In the code above I have thrown a custom exception for the example, but in production it can be any exception. And you can see I have used the ex.SendErrorEmail() method in the catch block to send the email. That's it. Now it will throw the exception and you will get an email in your inbox like below.

    That's it. It's so simple. Stay tuned for more. Happy programming.

    Technorati Tags: Exception, Extension Method, Error Handling, ASP.NET

    Read the article

  • Friday Fun: The Search For Wondla

    - by Asian Angel
    The best day of the week is finally here again, so it is time to have some fun while waiting to go home for the weekend. The game we have for you today takes you far into humanity's future, where you journey with Eva Nine in her quest to find other humans.

    Note: Today's game comes with a double bonus! First, there is a sequel game that you can move on to once you have completed the first one. Second, there are three wallpapers available in multiple sizes for those who enjoy the characters and artwork presented in the game (see below).

    The Search For Wondla

    The object of the game is to find the differences between two similar looking images based on artwork from The Search For Wondla by Tony DiTerlizzi. Are you ready to join Eva Nine in her quest to find other humans in the future?

    Note: There is a version available for those who would like to play The Search For Wondla on their iPads!

    The first game has 28 levels of difference-finding goodness for you to work through. Each level will list the minimum number of differences that you need to find to progress to the next level. If you need a hint along the way just click on the Shake or Reveal options at the bottom of the game play window. Complete a level quickly enough and you get bonus points! There will also be differences in the images for individual levels each time you play the game, so have fun!

    Note: The second game has 12 levels to complete.

    To give you a good feel for the game we have covered the first six levels here and provided seven clues for each level (you are only required to find a minimum of five):

    - Eva Nine viewing the holographic outdoor projections in the main hub of her living quarters...
    - Eva Nine in a grumpy mood as Muthr visits her at bedtime...
    - Eva Nine in her secret hideaway visiting old "childhood friends" as she contemplates her recent survival test failure.
    - Eva Nine viewing the entire set of floor plans for the underground sanctuary where she was born and has been growing up.
    - Eva Nine's escape to the surface as the underground sanctuary is attacked by the bounty hunter creature Besteel.
    - Eva Nine on the surface for the first time in her young life.

    Will she be successful in her quest? There is only one way to find out!

    Play The Search For Wondla Part 1
    Play The Search For Wondla Part 2

    Bonus Content

    If you have enjoyed this game you can learn more about the book and download the three wallpapers shown here by visiting the link below! Note: The wallpapers come in the following sizes: 1024*768, 1280*800, 1280*1024, 1440*900, iPhone, iPhone4, and iPad (click on the Extras link at the bottom of the page).

    Visit the Search For Wondla Homepage

    Do you enjoy playing difference-finding games? Then you will definitely want to have a look at another wonderful game that we have covered here: Friday Fun: Isis

    Read the article

  • Extracting the Date from a DateTime in Entity Framework 4 and LINQ

    - by Ken Cox [MVP]
    In my current ASP.NET 4 project, I'm displaying dates in a GridDateTimeColumn of Telerik's ASP.NET RadGrid control. I don't care about the time stuff, so my DataFormatString shows only the date bits:

        <telerik:GridDateTimeColumn FilterControlWidth="100px"
            DataField="DateCreated" HeaderText="Created"
            SortExpression="DateCreated" ReadOnly="True"
            UniqueName="DateCreated" PickerType="DatePicker"
            DataFormatString="{0:dd MMM yy}">

    My problem was that I couldn't get the built-in column filtering (it uses Telerik's DatePicker control) to behave. The DatePicker assumes that the time is 00:00:00, but the data would have times like 09:22:21. So, when you select a date and apply the EqualTo filter, you get no results. You would get results if all the time portions were 00:00:00.

    In essence, I wanted my Entity Framework query to give the DatePicker what it wanted... a Date without the Time portion. Fortunately, EF4 provides the TruncateTime function. After you include

        Imports System.Data.Objects.EntityFunctions

    you'll find that your EF queries will accept the TruncateTime function. Here's my routine:

        Protected Sub RadGrid1_NeedDataSource _
            (ByVal source As Object, _
             ByVal e As Telerik.Web.UI.GridNeedDataSourceEventArgs) _
            Handles RadGrid1.NeedDataSource

            Dim ent As New OfficeBookDBEntities1
            Dim TopBOMs = From t In ent.TopBom, i In ent.Items _
                          Where t.BusActivityID = busActivityID _
                          And i.BusActivityID And t.ItemID = i.RecordID _
                          Order By t.DateUpdated Descending _
                          Select New With {.TopBomID = t.TopBomID, .ItemID = t.ItemID, _
                                           .PartNumber = i.PartNumber, _
                                           .Description = i.Description, .Notes = t.Notes, _
                                           .DateCreated = TruncateTime(t.DateCreated), _
                                           .DateUpdated = TruncateTime(t.DateUpdated)}

            RadGrid1.DataSource = TopBOMs
        End Sub

    Now when I select March 14, 2011 on the DatePicker, the filter doesn't stumble on time values that don't make sense.

    Full Disclosure: Telerik gives me (and other developer MVPs) free copies of their suite.
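    As a side note, the date truncation that TruncateTime performs corresponds roughly to the following T-SQL. This is a hedged sketch against the post's TopBom table; the exact SQL that EF4 emits may differ:

        -- SQL Server 2008 and later: cast away the time portion
        SELECT CONVERT(date, DateCreated) AS DateCreatedOnly
        FROM TopBom;

        -- SQL Server 2005 idiom: round the datetime down to midnight
        SELECT DATEADD(dd, DATEDIFF(dd, 0, DateCreated), 0) AS DateCreatedOnly
        FROM TopBom;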

    Read the article

  • Creating an HttpHandler to handle request of your own extension

    - by Jalpesh P. Vadgama
    I have already posted about HTTP handlers in detail some time ago, here. Now let's create an HTTP handler which will handle my custom extension. For that we need to create a handler class which implements IHttpHandler. As we are implementing IHttpHandler, we need to implement one method called ProcessRequest and an IsReusable property. The ProcessRequest function will handle all the requests for my custom extension. So here is the code for my HTTP handler class:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;

        namespace Experiement
        {
            public class MyExtensionHandler : IHttpHandler
            {
                public MyExtensionHandler()
                {
                    //Implement initialization here
                }

                bool IHttpHandler.IsReusable
                {
                    get { return true; }
                }

                void IHttpHandler.ProcessRequest(HttpContext context)
                {
                    string excuttablepath = context.Request.AppRelativeCurrentExecutionFilePath;
                    if (excuttablepath.Contains("HelloWorld.dotnetjalps"))
                    {
                        Page page = new HelloWorld();
                        page.AppRelativeVirtualPath = context.Request.AppRelativeCurrentExecutionFilePath;
                        page.ProcessRequest(context);
                    }
                }
            }
        }

    In the code above you can see that in the ProcessRequest function I am getting the current executable path and then processing that page. Now let's create a page with the extension .dotnetjalps, which we will then process with the handler created above. It will create a page like the following. Now let's write something in its Page_Load event:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        namespace Experiement
        {
            public partial class HelloWorld : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    Response.Write("Hello World");
                }
            }
        }

    Now we have to tell our web server that we want to process requests for the .dotnetjalps extension through our custom HTTP handler. For that we need to add a tag in the httpHandlers section of web.config like the following:

        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0" />
            <httpHandlers>
              <add verb="*" path="*.dotnetjalps" type="Experiement.MyExtensionHandler,Experiement"/>
            </httpHandlers>
          </system.web>
        </configuration>

    That's it. Now run that page in a browser and it will execute like the following. Isn't it cool? Stay tuned for more. Happy programming.

    Technorati Tags: HttpHandler, ASP.NET, Extension

    Read the article

  • Where’s my MD.050?

    - by Dave Burke
    A question that I'm sometimes asked is "where's my MD.050 in OUM?" For those not familiar with an MD.050, it served the purpose of a Functional Design Document (FDD) in one of Oracle's legacy methods. Functional Design Documents have existed for many years, with their primary purpose being to describe the functional aspects of one or more components of an IT system, typically a Custom Extension of some sort.

    So why don't we have a direct replacement for the MD.050/FDD in OUM? In simple terms, the disadvantage of the MD.050/FDD approach is that it tends to lead practitioners into "Design mode" too early in the process, whereas OUM encourages more emphasis on gathering and describing the functional requirements of a system ahead of the formal Analysis and Design process. So that just means more work up front for the Business Analyst or Functional Consultants, right? Well, no... the design of a solution, particularly when it involves a complex custom extension, does not necessarily take longer just because you put more thought into the functional requirements. In fact, one could argue the complete opposite: by putting more emphasis on clearly understanding the nuances of the functional requirements early in the process, the overall time and cost incurred during the Analysis to Design process should be less. In short, as your understanding of requirements matures over time, it is far easier (and more cost effective) to update a document or a diagram than to change lines of code.

    So how does that translate into Tasks and Work Products in OUM? Let us assume you have reached a point on a project where a Custom Extension is needed. One of the first things you should consider doing is creating a Use Case, and remember, a Use Case could be as simple as a few lines of text reflecting a "User Story", or it could be what Cockburn [1] describes as a "fully dressed Use Case". It is worth mentioning at this point the highly scalable nature of OUM, in the sense that "documents" should not be produced just because that is the way we have always done things. Some projects may well be predicated upon a base of electronic documents, whilst other projects may take a much more Agile approach to describing functional requirements, through "User Stories" perhaps.

    In any event, it is quite common for a Custom Extension to involve the creation of several "components", i.e. some new screens, an interface, a report, etc. Therefore several Use Cases might be required, which in turn can then be assembled into a Use Case Package. Once you have the Use Cases attributed to an appropriate (fit-for-purpose) level of detail, and assembled into a Package, you can now create an Analysis Model for the Package. An Analysis Model is conceptual in nature and, depending on the solution being developed, would involve the creation of one or more diagrams (i.e. Sequence Diagrams, Collaboration Diagrams, etc.) which collectively describe the Data, Behavior and User Interface requirements of the solution. If required, the various elements of the Analysis Model may be indexed via an Analysis Specification. For Custom Extension projects that follow a pure Object Oriented approach, the Analysis Model will naturally support the development of the Design Model without any further artifacts. However, for projects that are transitioning to this approach, the various elements of the Analysis Model may be represented within the Analysis Specification.

    If we now return to the original question of "Where's my MD.050?", the full answer would be:

    - Capture the functional requirements within a Use Case
    - Group related Use Cases into a Package
    - Create an Analysis Model for each Package
    - Consider creating an Analysis Specification (AN.100) as an index to each Analysis Model artifact

    An alternative answer for a relatively simple Custom Extension would be:

    - Capture the functional requirements within a Use Case
    - Optionally, group related Use Cases into a Package
    - Create an Analysis Specification (AN.100) for each Package

    [1] Cockburn, A., 2000, Writing Effective Use Cases, Addison-Wesley Professional; Edition 1

    Read the article

  • Managing User & Role Security with Oracle SQL Developer

    - by thatjeffsmith
    With the advent of SQL Developer v3.0, users have had access to some powerful database administration features. Version 3.1 introduced more powerful features such as an interface to Data Pump and RMAN. Today I want to talk about some very simple but frequently run tasks that SQL Developer can assist with, like:

    - identifying privs granted to users
    - managing role privs
    - assigning new roles and privs to users & roles

    Before getting started, you'll need a connection to the database with the proper privileges. The common ROLE used to accomplish this is the 'DBA' role. Curious as to what the DBA role is actually comprised of? Let's find out!

    Open the DBA Console

    First make sure you're connected to the database you want to manage security on with a privileged administrator account. Then open the View menu and select 'DBA.'

    [Screenshot: Accessing the DBA panel]

    'Create' a Connection

    Click on the green '+' button in the DBA panel. It will ask you to choose a previously defined SQL Developer connection.

    [Screenshot: Defining a DBA connection in Oracle SQL Developer]

    Once connected you will see a tree list of DBA features you can start interacting with.

    Expand the 'Security' Tree Node

    As you click on an object in the DBA panel, the 'viewer' will open on the right-hand side, just like you are accustomed to seeing when clicking on a table or stored procedure.

    [Screenshot: Accessing the DBA role]

    If I'm a newly hired Oracle DBA, the first thing I might want to do is become very familiar with the DBA role. People will be asking you to grant them this role or a subset of its privileges. Once you see what the role can do, you will become VERY protective of it. My favorite 3-letter 4-letter word is 'ANY', and the DBA role is littered with privileges like this:

    [Screenshot: ANY TABLE privs granted to the DBA role]

    So if this doesn't freak you out, then maybe you should re-consider your career path. Or in other words, don't be granting this role to ANYONE you don't completely trust to take care of your database.

    If I'm just assigned a new database to manage, the first thing I might want to look at is just WHO has been assigned the DBA role. SQL Developer makes this easy to ascertain: just click on the 'User Grantees' panel.

    [Screenshot: Who has the keys to your car?]

    Making Changes to Roles and Users

    If you mouse-right-click on a user in the Tree, you can do individual tasks like grant a sys priv or expire an account. But you can also use the 'Edit User' dialog to do a lot of work in one pass. As you click through options in these dialogs, it will build the 'ALTER USER' script in the SQL panel, which can then be executed or copied to the worksheet or to your .SQL file to be run at your discretion.

    A Few Clicks vs a Lot of Typing

    These dialogs won't make you a DBA, but if you're pressed for time and you're already in SQL Developer, they can sure help you make up for lost time in just a few clicks!
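    If you like to double-check from a worksheet what the GUI is showing you, the Oracle data dictionary exposes the same information. A hedged sketch follows; the user name is hypothetical, and your account needs sufficient privileges to query the DBA_ views:

        -- What is the DBA role actually comprised of?
        SELECT privilege
        FROM role_sys_privs
        WHERE role = 'DBA'
        ORDER BY privilege;

        -- WHO has been granted the DBA role?
        SELECT grantee
        FROM dba_role_privs
        WHERE granted_role = 'DBA';

        -- The kind of statements the 'Edit User' dialog scripts for you
        GRANT DBA TO jsmith;
        ALTER USER jsmith ACCOUNT LOCK;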

    Read the article

  • Unable to build my c++ code with g++ 4.6.3

    - by Mriganka
    I am facing multiple issues with building my C++ code on Ubuntu 12.04. This code was building and running fine on RH Enterprise. I am using g++ 4.6.3. Here's the output of g++ -v:

        g++ -v
        Using built-in specs.
        COLLECT_GCC=g++
        COLLECT_LTO_WRAPPER=/usr/lib/gcc/i686-linux-gnu/4.6/lto-wrapper
        Target: i686-linux-gnu
        Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --enable-targets=all --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=i686-linux-gnu --host=i686-linux-gnu --target=i686-linux-gnu
        Thread model: posix
        gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)

    Here's a sample of my code:

        #include "Word.h"
        #include <string>

        using namespace std;

        pthread_mutex_t Word::_lock = PTHREAD_MUTEX_INITIALIZER;

        Word::Word(): _occurrences(1)
        {
            memset(_buf, 0, 25);
        }

        Word::Word(char *str): _occurrences(1)
        {
            memset(_buf, 0, 25);
            if (str != NULL)
            {
                strncpy(_buf, str, strlen(str));
            }
        }

    g++ -c -ansi, g++ -c -std=c++98, and g++ -c -std=c++03 all fail to build the code correctly. I get the following compilation errors:

        mriganka@ubuntu:~/WordCount$ make
        g++ -c -g -ansi Word.cpp -o Word.o
        Word.cpp: In constructor 'Word::Word()':
        Word.cpp:10:21: error: 'memset' was not declared in this scope
        Word.cpp: In constructor 'Word::Word(char*)':
        Word.cpp:16:21: error: 'memset' was not declared in this scope
        Word.cpp:19:34: error: 'strlen' was not declared in this scope
        Word.cpp:19:35: error: 'strncpy' was not declared in this scope
        Word.cpp: In member function 'void Word::operator=(const Word&)':
        Word.cpp:37:42: error: 'strlen' was not declared in this scope
        Word.cpp:37:43: error: 'strncpy' was not declared in this scope
        Word.cpp: In copy constructor 'Word::Word(const Word&)':
        Word.cpp:44:21: error: 'memset' was not declared in this scope
        Word.cpp:45:52: error: 'strlen' was not declared in this scope
        Word.cpp:45:53: error: 'strncpy' was not declared in this scope

    So basically g++ 4.6.3 on Ubuntu 12.04 is not able to recognize the standard C++ headers, and I am not finding a way out of this situation.

    Second problem: in order to make progress, I included <string.h> instead of <string>. But now I am facing linking errors with my message queue and pthread library functions. Here's the error that I am getting:

        mriganka@ubuntu:~/WordCount$ make
        g++ -c -g -ansi Word.cpp -o Word.o
        g++ -lrt -I/usr/lib/i386-linux-gnu Word.o HashMap.o main.o -o word_count
        main.o: In function `main':
        /home/mriganka/WordCount/main.cpp:75: undefined reference to `pthread_create'
        /home/mriganka/WordCount/main.cpp:90: undefined reference to `mq_open'
        /home/mriganka/WordCount/main.cpp:93: undefined reference to `mq_getattr'
        /home/mriganka/WordCount/main.cpp:113: undefined reference to `mq_send'
        /home/mriganka/WordCount/main.cpp:123: undefined reference to `pthread_join'
        /home/mriganka/WordCount/main.cpp:129: undefined reference to `mq_close'
        /home/mriganka/WordCount/main.cpp:130: undefined reference to `mq_unlink'
        main.o: In function `count_words(void*)':
        /home/mriganka/WordCount/main.cpp:151: undefined reference to `mq_open'
        /home/mriganka/WordCount/main.cpp:154: undefined reference to `mq_getattr'
        /home/mriganka/WordCount/main.cpp:162: undefined reference to `mq_timedreceive'
        collect2: ld returned 1 exit status

    Here's my makefile:

        CC=g++
        CFLAGS=-c -g -ansi
        LDFLAGS=-lrt
        INC=-I/usr/lib/i386-linux-gnu
        SOURCES=Word.cpp HashMap.cpp main.cpp
        OBJECTS=$(SOURCES:.cpp=.o)
        EXECUTABLE=word_count

        all: $(SOURCES) $(EXECUTABLE)

        $(EXECUTABLE): $(OBJECTS)
            $(CC) $(LDFLAGS) $(INC) -pthread $(OBJECTS) -o $@

        .cpp.o:
            $(CC) $(CFLAGS) $< -o $@

        clean:
            rm -f *.o word_count

    Please help me resolve both issues. I have searched online relentlessly for a solution to these problems, but no one seems to have encountered these issues.

    Read the article

  • New security options in UCM Patch Set 3

    - by kyle.hatlestad
    While the Patch Set 3 (PS3) release was mostly focused on bug fixes and such, some new features sneaked in there. One of those new features concerns the security options.

    In 10gR3 and prior versions, UCM had a component called Collaboration Manager which allowed project folders to be created and groups of users assigned as members to collaborate on documents. With this component came access control lists (ACLs) for content and folders. Users could assign specific security rights on each and every document and folder within a project. It was even possible to enable these ACLs without having the Collaboration Manager component enabled (see technote# 603148.1).

    When 11g came out, Collaboration Manager was no longer available, but the configuration settings to turn on ACLs were still there. Well, in PS3 they're implemented slightly differently. And there is a new component available which adds an additional dimension for defining security on the object: Roles. So now instead of selecting individual users or groups of users (defined as an Alias in User Admin), you can select a particular role, and if a user has that role, they are granted that level of access. This can allow for a much more flexible and manageable security model, instead of trying to manage with just user and group access as people come and go in the organization.

    The way that it is enabled is still through configuration entries. First log in as an administrator and go to Administration -> Admin Server. On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top. In the list of Disabled Components, enable the RoleEntityACL component. Then click the General Configuration link on the left. In the Additional Configuration Variables text area, enter the new configuration values:

        UseEntitySecurity=true
        SpecialAuthGroups=<comma separated list of Security Groups to honor ACLs>

    The SpecialAuthGroups should be a list of Security Groups that honor the ACL fields. If an ACL is applied to a content item with a Security Group outside this list, it will be ignored. Save the settings and restart the instance.

    Upon restart, three new metadata fields will be created: xClbraUserList, xClbraAliasList, xClbraRoleList. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection.

    On the Check In, Search, and Update pages, values are added by simply typing in the value and getting a type-ahead list of possible values. Select the value, click Add, and then set the level of access (Read, Write, Delete, or Admin). If all of the fields are blank, then it simply falls back to just Security Group and Account access.

    For Users and Groups, these values are automatically picked up from the corresponding database tables. In the case of Roles, this is an explicitly defined list of choices that are made available. These values must match the role that is being defined in WebLogic Server or your LDAP/AD repository. To add these values, go to Administration -> Admin Applets -> Configuration Manager. On the Views tab, edit the values for the ExternalRolesView. By default, 'guest' and 'authenticated' are added. Once added through the view, they will be available to select from for the Roles Access List.

    As for how they are stored in the metadata fields, each entry starts with its identifier: the ampersand (&) symbol for users, the "at" (@) symbol for groups, and the colon (:) for roles. Following that is the entity name, and at the end is the level of access in parentheses, e.g. (RWDA). Each entry is separated by a comma. So if you were populating values through Batch Loader or an external source, the values would be defined this way (see the illustration at the end of this entry).

    Detailed information on Access Control Lists can be found in the Oracle Fusion Middleware System Administrator's Guide for Oracle Content Server.
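    To make that stored format concrete, here is an illustration of what the three fields might contain for one content item; the user, group, and role names are hypothetical:

        xClbraUserList:  &jsmith(RW), &mjones(R)
        xClbraAliasList: @Engineering(RWD)
        xClbraRoleList:  :authenticated(R)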

    Read the article

  • Grouping data in LINQ with the help of group keyword

    - by vik20000in
    While working with any kind of advanced query, grouping is a very important factor. Grouping helps in executing special functions like Sum, Max, Average, etc. on certain groups of data inside the result set. Grouping is done with the help of the group keyword. Below is an example of the basic group functionality:

        int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };

        var numberGroups =
            from num in numbers
            group num by num % 5 into numGroup
            select new { Remainder = numGroup.Key, Numbers = numGroup };

    In the above example we have grouped the values based on the remainder left over when divided by 5. First we group the values based on the remainder when divided by 5 into the numGroup variable. numGroup.Key gives the value of the key on which the grouping has been applied, and numGroup itself contains all the records that are contained in that group. Below is another example to explain the same:

        string[] words = { "blueberry", "abacus", "banana", "apple", "cheese" };

        var wordGroups =
            from num in words
            group num by num[0] into grp
            select new { FirstLetter = grp.Key, Words = grp };

    In the above example we are grouping the values by the first character of the string (num[0]).

    Just like the ordering operators, the group by clause also allows us to write our own logic for the equality comparison (that means we can group items while ignoring case, for example, by writing our own implementation). For this we need to pass an object that implements the IEqualityComparer<string> interface. Below is an example:

        public class AnagramEqualityComparer : IEqualityComparer<string>
        {
            public bool Equals(string x, string y)
            {
                return getCanonicalString(x) == getCanonicalString(y);
            }

            public int GetHashCode(string obj)
            {
                return getCanonicalString(obj).GetHashCode();
            }

            private string getCanonicalString(string word)
            {
                char[] wordChars = word.ToCharArray();
                Array.Sort<char>(wordChars);
                return new string(wordChars);
            }
        }

        string[] anagrams = { "from   ", " salt", " earn", "  last   ", " near " };
        var orderGroups = anagrams.GroupBy(w => w.Trim(), new AnagramEqualityComparer());

    Vikram

    Read the article

  • NServiceBus Generic Host and mqsvc.exe high CPU

    - by Michael Stephenson
    We have been doing some work with NServiceBus recently and observed some unusual behaviour which was caused by our mistake and seemed worthy of a small post.

    The Scenario

    In our solution we were doing some standard NServiceBus stuff by pushing a message to a queue using NServiceBus. We had a direct send/receive scenario rather than a publish/subscribe one. The background process which was meant to collect the message and then process it was a normal NServiceBus message handler. We would run NServiceBus.Host.exe, which would find the handler and then do the usual NServiceBus magic.

    The Problem

    In this solution we were creating some automated tests around this module of the integration process to ensure that it would work well. We had two tests.

    Test 1: This test would start NServiceBus.Host.exe using the Process object, then seed a message to the queue via our web service façade sitting above the queue which wrapped NServiceBus. The background process would then process the message and the test would check the message had been processed fine. If all was well then the NServiceBus.Host.exe process was stopped.

    Test 2: In test 2 we would do a very similar thing, except that instead of starting the process, the test would install NServiceBus.Host.exe as a Windows service, start the service before the test, and stop it once the test was executed.

    The Results of the Tests

    Test 1 worked really well. However, in test 2 we found that it didn't really work at all; instead of doing the background processing, we were finding that between mqsvc.exe and NServiceBus.Host.exe the CPU on the machine was maxed and nothing was really happening.

    The Solution

    After trying a few things we found that the permissions on the queue were not set correctly. Once this was resolved it all worked fine: CPU was not excessive and it ran just like the console application.

    I think the couple of takeaways from this are:

    - Make sure you set the Windows service for the NServiceBus Generic Host to the right credentials. When you install the generic host as a Windows service, by default it will use the default Windows credentials. For any production-like scenario you should be using a domain account to run the process via the Windows service.

    - Make sure you have the queue set with the right permissions. For the credentials you have used to configure the generic host as a Windows service, you should ensure that this user has the appropriate permissions for any queues it will interact with.

    - Make sure you turn on the right logging configuration in NServiceBus. When this wasn't working correctly we didn't know there was an issue; we were just experiencing the high CPU condition. I am a little surprised that there wasn't something logged and that the process didn't crash. I guess this could be by design, bearing in mind that the process could be monitoring many queues. My point here is just that originally we didn't have all of the log4net logging which is available from NServiceBus turned on. It's probably a good idea to have this turned on and configured until you are happy your solution is working fine.

    Thanks to Ahmed Hashmi on my team who got this working in the end.

    Read the article

  • Rendering Linear Gradients using the HTML5 Canvas

    - by dwahlin
    Related HTML5 Canvas Posts:

    - Getting Started with the HTML5 Canvas
    - Rendering Text with the HTML5 Canvas
    - Creating a Line Chart using the HTML5 Canvas
    - New Pluralsight Course: HTML5 Canvas Fundamentals

    Gradients are everywhere. They're used to enhance toolbars or buttons and help add additional flare to a web page when used appropriately. In the past we've always had to rely on images to render gradients, which works well but isn't necessarily the most efficient (although 1-pixel-wide images do work well). CSS3 provides a great way to render gradients in modern browsers (see http://www.colorzilla.com/gradient-editor for a nice online gradient generator tool), but it's not the only option. If you're working with charts, games, multimedia or other HTML5 Canvas applications you can also use gradients and render them on the client-side without relying on images. In this post I'll introduce how to use linear gradients and discuss the different functions that can be used to create them.

    Creating Linear Gradients

    Linear gradients can be created using the 2D context's createLinearGradient function. The function takes the starting x,y coordinates and ending x,y coordinates of the gradient:

        createLinearGradient(x1, y1, x2, y2);

    By changing the start and end coordinates you can control the direction that the gradient renders. For example, adding the following coordinates causes the gradient to render from left to right, since the y value stays at 0 for both points while the x value changes from 0 to 200:

        var lgrad = ctx.createLinearGradient(0, 0, 200, 0);

    [Figure: how changing the coordinates affects the gradient direction]

    Once a linear gradient object has been created you can set color stops using the addColorStop() function. It takes the location where the color should appear in the gradient, with 0 being the beginning and 1 being at the end (0.5 would be in the middle), as well as the color to display in the gradient:

        lgrad.addColorStop(0, 'white');
        lgrad.addColorStop(1, 'gray');

    An example of combining createLinearGradient() with addColorStop() is shown next:

        var canvas = document.getElementById('myCanvas');
        var ctx = canvas.getContext('2d');

        var lgrad = ctx.createLinearGradient(0, 0, 200, 0);
        lgrad.addColorStop(0, 'white');
        lgrad.addColorStop(1, 'gray');

        ctx.fillStyle = lgrad;
        ctx.fillRect(0, 0, 200, 200);
        ctx.strokeRect(0, 0, 200, 200);

    This code renders a white-to-gray gradient as shown next. A live example of using createLinearGradient() is shown next; click the Result tab to see the code in action.

    In the next post on the HTML5 Canvas I'll take a look at radial gradients and how they can be used. In the meantime, if you're interested in learning more about the HTML5 Canvas and how it can be used in your Web or Windows 8 applications, check out my HTML5 Canvas Fundamentals course from Pluralsight. It has over 4 1/2 hours of canvas goodness packed in it.

    Read the article

  • What’s the use of code reuse?

    - by Tony Davis
    All great developers write reusable code, don't they? Well, maybe, but as with all statements regarding what "great" developers do or don't do, it's probably an over-simplification. A novice programmer, in particular, will encounter in the literature a general assumption of the importance of code reusability. They spend time worrying about DRY (don't repeat yourself), moving logic into specific "helper" modules that they can then reuse, agonizing about the minutiae of the class structure, inheritance and interface design that will promote easy reuse.

    Unfortunately, writing code specifically for reuse often leads to complicated object hierarchies and inheritance models that are anything but reusable. If, instead, one strives to write simple code units that are highly maintainable and perform a single function, in a concise, isolated fashion, then the potential for reuse simply "drops out" as a natural by-product.

    Programmers, of course, care about these principles, about encapsulation and clean interfaces that don't expose inner workings and allow easy pluggability. This is great when it helps with the maintenance and development of code, but how often, in practice, do we actually reuse our code? Most DBAs and database developers are familiar with the practical reasons for the limited opportunities to reuse database code and its potential downsides. However, surely elsewhere in our code base, reuse happens often. After all, we can all name examples, such as date/time handling modules, which if we write with enough care we can plug in to many places.

    I spoke to a developer just yesterday who looked me in the eye and told me that in 30+ years as a developer (a successful one, I'd add), he'd never once reused his own code. As I sat blinking in disbelief, he explained that, of course, he always thought he would reuse it. He'd often agonized over its design, certain that he was creating code of great significance that he and other generations would reuse, with grateful tears misting their eyes. In fact, it never happened. He had in his head most of the algorithms he needed and would simply write the code from scratch each time, refining the algorithms and tailoring the code to meet the specific requirements. It was, he said, simply quicker to do that than dig out the old code, check it, correct the mistakes, and adapt it.

    Is this a common experience, or just a strange anomaly? Viewed in a certain light, building code with a focus on reusability seems to hark back to a past age where people built cars and music systems with the idea that someone else could and would replace and reuse the parts. Technology advances so rapidly that the next time you need the "same" code, it's likely a new technique, or a whole new language, has emerged in the meantime, better equipped to tackle the task. Maybe we should be less fearful of the idea that we could write code well suited to the system requirements, but with little regard for reuse potential, and then rewrite a better version from scratch the next time.

    Read the article

  • Using the Katana Authentication handlers with NancyFx

    - by cibrax
    Once you write an OWIN middleware service, it can be reused everywhere as long as OWIN is supported. In my last post, I discussed how you could write an Authentication Handler in Katana for Hawk (HMAC Authentication). The good news is that NancyFx can be run as an OWIN handler, so you can use many existing middleware services, including the ones that ship with Katana.

    Running NancyFx as an OWIN handler is pretty straightforward, and discussed in detail as part of the NancyFx documentation here. After you run the steps described there and have the application working, only a few more steps are required to register the additional middleware services. The example below shows how the Startup class is modified to include Hawk authentication:

        public class Startup
        {
            public void Configuration(IAppBuilder app)
            {
                app.UseHawkAuthentication(new HawkAuthenticationOptions
                {
                    Credentials = (id) =>
                    {
                        return new HawkCredential
                        {
                            Id = "dh37fgj492je",
                            Key = "werxhqb98rpaxn39848xrunpaw3489ruxnpa98w4rxn",
                            Algorithm = "hmacsha256",
                            User = "steve"
                        };
                    }
                });

                app.UseNancy();
            }
        }

    This code registers the Hawk Authentication Handler on top of the OWIN pipeline, so it will try to authenticate the calls before the request messages are passed over to NancyFx. The authentication handlers in Katana set the user principal in the OWIN environment using the key "server.User". The following code shows how you can get that principal in a NancyFx module:

        public class HomeModule : NancyModule
        {
            public HomeModule()
            {
                Get["/"] = x =>
                {
                    var env = (IDictionary<string, object>)Context.Items[NancyOwinHost.RequestEnvironmentKey];

                    if (!env.ContainsKey("server.User") || env["server.User"] == null)
                    {
                        return HttpStatusCode.Unauthorized;
                    }

                    var identity = (ClaimsPrincipal)env["server.User"];
                    return "Hello " + identity.Identity.Name;
                };
            }
        }

    Thanks to OWIN, you don't need to know the details of how these cross-cutting concerns are implemented in every possible web application framework.

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 3)

    - by Sadequl Hussain
    Continuing from Part 2 of the Database Migration Checklist series:

    Step 10: Full-text catalogs and full-text indexing

    This is one area of SQL Server where people do not seem to take notice unless something goes wrong. Full-text functionality is a specialised area in database application development and is not usually implemented in your everyday OLTP systems. Nevertheless, if you are migrating a database that uses full-text indexing on one or more tables, you need to be aware of a few points.

    First of all, SQL Server 2005 now allows full-text catalog files to be restored or attached along with the rest of the database. However, after migration, if you are unable to look at the properties of any full-text catalogs, you are probably better off dropping and recreating them. You may also get the following error messages along the way:

        Msg 9954, Level 16, State 2, Line 1
        The Full-Text Service (msftesql) is disabled. The system administrator must enable this service.

    This basically means the full-text service is not running (disabled or stopped) in the destination instance. You will need to start it from the Configuration Manager. Similarly, if you get the following message, you will also need to drop and recreate the catalog and populate it:

        Msg 7624, Level 16, State 1, Line 1
        Full-text catalog 'catalog_name' is in an unusable state. Drop and re-create this full-text catalog.

    A full population of full-text indexes can be a time and resource intensive operation. Obviously you will want to schedule it for low-usage hours if the database is restored to an existing production server. Also, bear in mind that any scheduled job that existed in the source server for populating the full-text catalog (e.g. a nightly process for incremental updates) will need to be re-created in the destination.

    Step 11: Database collation considerations

    Another sticky area to consider during a migration is the collation setting. Ideally you would want to restore or attach the database in a SQL Server instance with the same collation. Although not commonly used, SQL Server allows you to change a database's collation by using the ALTER DATABASE command:

        ALTER DATABASE database_name COLLATE collation_name

    You should not be using this command without good reason, as it can get really dangerous. When you change the database collation, it does not change the collation of the existing user table columns. However, the columns of every new table, every new UDT, and subsequently created variables or parameters in code will use the new setting. The collation of every char, nchar, varchar, nvarchar, text or ntext field of the system tables will also be changed. Stored procedure and function parameters will be changed to the new collation, and finally, every character-based system data type and user-defined data type will also be affected. And the change may not be successful either if there are dependent objects involved. You may get one or multiple messages like the following:

        Cannot ALTER 'object_name' because it is being referenced by object 'dependent_object_name'.

    That is why it is important to test and check for collation-related issues. Collation also affects queries that use comparisons of character-based data. If errors arise due to the two sides of a comparison being in different collation orders, the COLLATE keyword can be used to cast one side to the same collation as the other (see the sketch at the end of this entry).

    Continues...
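    To make the collation cast mentioned at the end of Step 11 concrete, here is a minimal hedged sketch; the table and column names are hypothetical, and SQL_Latin1_General_CP1_CI_AS stands in for whichever collation the other side of your comparison actually uses:

        -- a.Code = b.Code alone fails with a collation-conflict error
        -- when the two columns were created under different collations;
        -- casting one side resolves the conflict:
        SELECT a.Code
        FROM dbo.TableA AS a
        JOIN dbo.TableB AS b
            ON a.Code = b.Code COLLATE SQL_Latin1_General_CP1_CI_AS;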

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 23 (sys.dm_db_index_usage_stats)

    - by Tamarick Hill
    The sys.dm_db_index_usage_stats Dynamic Management View is used to return usage information about the various indexes on your SQL Server instance. Let's have a look at this DMV against our AdventureWorks2012 database so we can examine the information returned:

        SELECT *
        FROM sys.dm_db_index_usage_stats
        WHERE database_id = db_id('AdventureWorks2012')

    The first three columns in the result set represent the database_id, object_id, and index_id of a given row. You can join these columns back to other system tables to extract the actual database, object, and index names (a helper query is sketched below).

    The next four columns are probably the most beneficial columns within this DMV. First, the user_seeks column represents the number of times that a user query caused a seek operation against a particular index. The user_scans column represents how many times a user query caused a scan operation on a particular index. The user_lookups column represents how many times an index was used to perform a lookup operation. The user_updates column refers to how many times an index had to be updated due to a write operation that affected that index.

    The last_user_seek, last_user_scan, last_user_lookup, and last_user_update columns provide you with DATETIME information about when the last user seek, scan, lookup, or update operation was performed. The remaining columns in the result set are the same as the ones we previously discussed, except that the various operations are generated from system background requests rather than user requests.

    This is an extremely useful DMV and one of my favorites when it comes to index maintenance. As we all know, indexes are extremely beneficial for improving the performance of your read operations. But indexes have a downside as well: they slow down the performance of your write operations, and they also require additional resources for storage. For this reason, in my opinion, it is important to regularly analyze the indexes on your system to make sure the indexes you have are being used efficiently.

    My AdventureWorks2012 database is only used for demonstrating or testing things, so I don't have a lot of meaningful information here, but for a production system, if you see an index that is never getting any seeks, scans, or lookups, but is constantly getting a ton of updates, it more than likely would be a good candidate for you to consider removing. You would not be getting much benefit from the index, yet it is incurring a cost on your system due to constantly having to be updated for your write operations, not to mention the additional storage it is consuming. You should regularly analyze your indexes to ensure you keep your database systems as efficient and lean as possible.

    One thing to note is that these DMV statistics are reset every time SQL Server is restarted. Therefore it would not be a wise idea to make decisions about removing indexes right after a server reboot or a cluster roll. If you restart your SQL Server instances frequently, for example if you schedule weekly/monthly cluster rolls, then you may not capture indexes that are being used for weekly/monthly reports that run for business users. And if you remove them, you may have some upset people at your desk on Monday morning.
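    Since the DMV returns only IDs, here is a hedged helper sketch that resolves them to names; run it inside the database you are analyzing, and note that the two-argument form of OBJECT_NAME assumes SQL Server 2005 SP2 or later:

        SELECT DB_NAME(s.database_id) AS database_name,
               OBJECT_NAME(s.object_id, s.database_id) AS object_name,
               i.name AS index_name,
               s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
        FROM sys.dm_db_index_usage_stats AS s
        JOIN sys.indexes AS i
            ON i.object_id = s.object_id
            AND i.index_id = s.index_id
        WHERE s.database_id = DB_ID('AdventureWorks2012');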
    If you would like to begin analyzing your indexes to possibly remove the ones that your system is not using, I would recommend building a process to load this DMV information into a table on a scheduled basis, with the frequency depending on how often you perform an operation that would reset these statistics. Then you can analyze the data over a period of time to get a more accurate view of which indexes are really being used and which ones are not. For more information about this DMV, please see the Books Online link: http://msdn.microsoft.com/en-us/library/ms188755.aspx

    Follow me on Twitter @PrimeTimeDBA

    Read the article

  • Silverlight Cream for May 15, 2010 -- #862

    - by Dave Campbell
    In this Issue: Victor Gaudioso, Antoni Dol (x2), Brian Genisio, Shawn Wildermuth, Mike Snow, Phil Middlemiss, Pete Brown, Kirupa, Dan Wahlin, Glenn Block, Jeff Prosise, Anoop Madhusudanan, and Adam Kinney.

    Shoutouts:

    - Victor Gaudioso would like you to check out his Interview with Microsoft's Murray Gordon at MIX 10
    - Pete Brown announced: Connected Show Podcast #29 With ... Me!

    From SilverlightCream.com:

    - New Silverlight Video Tutorial: How to Create Fast Forward for the MediaElement: Victor Gaudioso's latest video tutorial is on creating the ability to fast-forward a MediaElement... check it out in the tutorial player itself!
    - Overlapping TabItems with the Silverlight Toolkit TabControl: Antoni Dol has a very cool tutorial up on the Toolkit TabItems control... not only is he overlapping them quite nicely, but this is a very cool tutorial...
    - QuoteFloat: Animating TextBlock PlaneProjections for a spiraling effect in Silverlight: Antoni Dol also has a Blend tutorial up on animating TextBlock items... run the demo and you'll want to read the rest :)
    - Adventures in MVVM – My ViewModel Base – Silverlight Support!: Brian Genisio continues his MVVM tutorials with this update on his ViewModel base using some new C# 4.0 features, and it fully supports Silverlight and WPF.
    - My Thoughts on the Windows Phone 7: Shawn Wildermuth gives his take on WP7. He included a port of his XBoxGames app to WP7... thanks Shawn!
    - Silverlight Tip of the Day #20 – Using Tooltips in Silverlight: I figured Mike Snow was going to overrun me with tips since I have missed a couple of days, but there's only one! ... and it's on Tooltips.
    - Animating the Silverlight opacity mask: Phil Middlemiss has an article at SilverZine describing a Behavior he wrote (and is sharing) that turns a FrameworkElement into an opacity mask for its parent container... cool demo on the page too.
    - Breaking Apart the Margin Property in Xaml for better Binding: Pete Brown dug in on a Twitter message and put some thoughts down about breaking a Margin apart to see about binding to the individual elements.
    - Building a Simple Windows Phone App: Kirupa has a 6-part tutorial up on building a not-your-typical first WP7 application... all good stuff!
    - Integrating HTML into Silverlight Applications: Dan Wahlin has a post up discussing three ways to display HTML inside a Silverlight app.
    - Hello MEF in Silverlight 4 and VB! (with an MVVM Light cameo): Glenn Block has a post up discussing MEF and MVVM, and it's in VB this time... and it's actually a great tutorial top to bottom... all source included of course :)
    - Understanding Input Scope in Silverlight for Windows Phone: Jeff Prosise has a good post up on the WP7 SIP and how to set the proper InputScope to get the SIP you want.
    - Thinking about Silverlight 'desktop' apps – Creating a Standalone Installer for offline installation (no browser): Anoop Madhusudanan is discussing something that's been floating around for a while: installing Silverlight from, say, a CD or DVD when someone installs your app. He's got some good code, but be sure to read Tim Heuer and Scott Guthrie's comments, and consider digging deeper into that part.
    - Using FluidMoveBehavior to animate grid coordinates in Silverlight: Adam Kinney has a cool post up on animating an object using the FluidMoveBehavior of Blend 4... looks great moving across a checkerboard... check out the demo, then grab the code.

    Stay in the 'Light!

    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream

    Join me @ SilverlightCream | Phoenix Silverlight User Group

    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • SQL SERVER – How to Compare the Schema of Two Databases with Schema Compare

    - by Pinal Dave
    Earlier I wrote about An Efficiency Tool to Compare and Synchronize SQL Server Databases and it was very well received. Since that blog post I have received quite a few questions asking how, just as with data, we can also compare schema and synchronize it. If you think about comparing schema manually, it is almost impossible to do so (a small code sketch at the end of this article illustrates why). Table schema is just one concept, but if you really want to compare all the schema of the database (triggers, views, stored procedures and everything else) it is just an impossible task. If you are a developer or database administrator who works in a production environment, then you know that there are many different occasions when we have to compare the schema of a database. Before deploying any changes to the production server, I personally like to make a note of every single schema change and document it, so in case of any issue I can always go back and refer to my documentation. As discussed earlier, it is practically impossible to do this task without the help of third-party tools. I personally use Devart Schema Compare for this task. It is an extremely easy tool. Let us see how it works. First I have two different databases – a) AdventureWorks2012 and b) AdventureWorks2012-V1. There are a total of three changes between these databases: one of the tables has an additional column, one of the tables has a new index, and one of the stored procedures is changed. Now let us see how dbForge Schema Compare works in this scenario. First open dbForge Schema Compare studio and click on New Schema Comparison. It will bring you to the following screen where we have to select the databases to compare. I have selected the AdventureWorks2012 and AdventureWorks2012-V1 databases. In the next screen we can verify various options, but for this demonstration we will keep them as they are. We will not change anything in the schema mapping screen, as in our case it is not required, but if you are comparing across schemas you may need this. The next screen is the most important one, as on this screen we select which kinds of objects we want to compare. You can see the options which are available to select; the screen lets you select objects from SQL Server 2000 to SQL Server 2012. Once you click on Compare in the previous screen it will bring you to the results screen, which essentially displays the differences between the two databases we selected earlier. As mentioned above, there are three different changes between the databases and all three are listed here: two of the changes belong to tables and one change belongs to a procedure. Let us click each of them one by one to see the difference. In the very first item we can see that there is an additional column in one database which did not exist in the other. In the second example we can see that the AdventureWorks2012 database has an additional index. The following example is very interesting: in this case we have changed the definition of the stored procedure, and the result pane shows exactly that. dbForge Schema Compare very effectively identifies the changes in schema and lists them neatly for developers. Here is one more screen: this software not only compares schemas but also provides options to update or drop objects as you choose, which I think is a brilliant option. Well, I have been using Schema Compare for quite a while and have found it very useful. Here are a few of the things which dbForge Schema Compare can do for developers and DBAs: 
Compare and synchronize SQL Server database schemas
Compare schemas of a live database and a SQL Server backup
Generate comparison reports in Excel and HTML formats
Eliminate mistakes in schema change propagation across environments
Track production database changes and customizations
Automate migration of schema changes using the command line interface
I suggest that you try out dbForge Schema Compare and let me know what you think of this product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL
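To get a feel for why the manual route does not scale, here is a minimal C# sketch (an illustration only, not from the original post) that compares just the column lists of the two databases through INFORMATION_SCHEMA. The connection strings are assumptions, and even this covers a single object type, while a schema compare tool also handles indexes, triggers, views, stored procedures and more:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

class ColumnDiff
{
    // Collects "schema.table.column (datatype)" for every column in a database.
    static HashSet<string> GetColumns(string connectionString)
    {
        var columns = new HashSet<string>();
        using (var con = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            @"SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
              FROM INFORMATION_SCHEMA.COLUMNS", con))
        {
            con.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    columns.Add(string.Format("{0}.{1}.{2} ({3})",
                        reader.GetString(0), reader.GetString(1),
                        reader.GetString(2), reader.GetString(3)));
                }
            }
        }
        return columns;
    }

    static void Main()
    {
        // Hypothetical connection strings for the two databases being compared.
        var left = GetColumns("Server=.;Database=AdventureWorks2012;Integrated Security=true");
        var right = GetColumns("Server=.;Database=AdventureWorks2012-V1;Integrated Security=true");

        left.ExceptWith(right); // keep only the columns missing from the second database
        foreach (var column in left)
        {
            Console.WriteLine("Only in AdventureWorks2012: " + column);
        }
    }
}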

    Read the article

  • Using progress dialog in Visual Studio extensions

    - by Utkarsh Shigihalli
    Originally posted on: http://geekswithblogs.net/onlyutkarsh/archive/2014/05/23/using-progress-dialog-in-visual-studio-extensions.aspx
As a Visual Studio extension developer you are required to keep the aesthetics of Visual Studio intact when you integrate your extension with it. Your extension looks odd when you try to use plain Windows controls and dialogs in it. The Visual Studio SDK exposes many interfaces so that your extension looks as integrated with Visual Studio as possible. When your extension is performing a long-running task, you have many options to report progress to the user. One such option is the Visual Studio status bar, and I have previously blogged about displaying progress through it. In this blog post I am going to highlight another way, using the IVsThreadedWaitDialog2 interface. One thing to note: as the name suggests, it is a dialog, so the user cannot perform any other action while it is shown; but because the dialog itself reports progress, Visual Studio appears responsive rather than hung, even while the task runs. Visual Studio itself makes heavy use of this interface. One example is when you are loading a solution (.sln) with lots of projects: Visual Studio displays a dialog implemented through this interface (screenshot below). So the first step is to get an instance of the IVsThreadedWaitDialog2 interface using the IServiceProvider interface.

var dialogFactory = _serviceProvider.GetService(typeof(SVsThreadedWaitDialogFactory)) as IVsThreadedWaitDialogFactory;
IVsThreadedWaitDialog2 dialog = null;
if (dialogFactory != null)
{
    dialogFactory.CreateInstance(out dialog);
}

So if you have initialized the package properly, the out parameter dialog will not be null and will contain an instance of the IVsThreadedWaitDialog2 interface. Once you have the instance, you call its methods to manage the dialog. I will cover 3 methods, StartWaitDialog, EndWaitDialog and HasCanceled, in this blog post. You show the progress dialog as below.

if (dialog != null && dialog.StartWaitDialog(
    "Threaded Wait Dialog",
    "VS is Busy",
    "Progress text",
    null,
    "Waiting status bar text",
    0,
    false,
    true) == VSConstants.S_OK)
{
    Thread.Sleep(4000);
}

As you can see from the method syntax, it is very similar to a standard Windows message box. If you pass true as the 7th parameter to the StartWaitDialog method (the snippet above passes false), you will also see a Cancel button, allowing the user to cancel the running task. You can react when the user cancels the task as below.

bool isCancelled;
dialog.HasCanceled(out isCancelled);
if (isCancelled)
{
    MessageBox.Show("Cancelled");
}

Finally, you can close the dialog when the task completes, as below.

int usercancel;
dialog.EndWaitDialog(out usercancel);

To help you quickly try the above code, I have created a sample. It is available for download from GitHub. The sample creates a tool window with two buttons to demo the scenarios explained above. The tool window can be accessed by clicking View –> Other Windows -> ProgressDialogDemo Window
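Putting the pieces together, here is a minimal consolidated sketch of the whole lifecycle (not from the original post; _serviceProvider is assumed to be the package's initialized service provider, and DoWorkChunk is a hypothetical stand-in for one slice of the long-running work):

private void RunWithWaitDialog()
{
    // Obtain the dialog through the factory, as shown above.
    var dialogFactory = _serviceProvider.GetService(typeof(SVsThreadedWaitDialogFactory))
        as IVsThreadedWaitDialogFactory;
    IVsThreadedWaitDialog2 dialog = null;
    if (dialogFactory != null)
    {
        dialogFactory.CreateInstance(out dialog);
    }

    if (dialog == null || dialog.StartWaitDialog(
        "My Extension",             // caption
        "My Extension is busy",     // wait message
        "Starting...",              // progress text
        null,                       // no status bar animation
        "My Extension is working",  // status bar text
        0,                          // show the dialog immediately
        true,                       // cancelable: show the Cancel button
        true) != VSConstants.S_OK)  // marquee progress
    {
        return; // could not show the dialog; bail out or run without it
    }

    try
    {
        for (int i = 0; i < 100; i++)
        {
            bool cancelled;
            dialog.HasCanceled(out cancelled);
            if (cancelled)
            {
                break; // the user clicked Cancel
            }
            DoWorkChunk(i); // hypothetical: one slice of the long-running task
        }
    }
    finally
    {
        int userCancel;
        dialog.EndWaitDialog(out userCancel); // always close the dialog
    }
}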

    Read the article

  • Using Lambdas for return values in Rhino.Mocks

    - by PSteele
    In a recent StackOverflow question, someone showed some sample code they’d like to be able to use.  The particular syntax they used isn’t supported by Rhino.Mocks, but it was an interesting idea that I thought could be easily implemented with an extension method. Background When stubbing a method return value, Rhino.Mocks supports the following syntax:

dependency.Stub(s => s.GetSomething()).Return(new Order());

The method signature is generic and therefore you get compile-time type checking that the object you’re returning matches the return value defined by the “GetSomething” method. You could also have Rhino.Mocks execute arbitrary code using the “Do” method:

dependency.Stub(s => s.GetSomething()).Do((Func<Order>) (() => new Order()));

This requires the cast though.  It works, but isn’t as clean as the original poster wanted.  They showed a simple example of something they’d like to see:

dependency.Stub(s => s.GetSomething()).Return(() => new Order());

Very clean, simple and no casting required.  While Rhino.Mocks doesn’t support this syntax, it’s easy to add it via an extension method. The Rhino.Mocks “Stub” method returns an IMethodOptions<T>.  We just need to accept a Func<T> and use that as the return value.  At first, this would seem straightforward:

public static IMethodOptions<T> Return<T>(this IMethodOptions<T> opts, Func<T> factory)
{
    opts.Return(factory());
    return opts;
}

And this would work and would provide the syntax the user was looking for.  But the problem with this is that you lose the late-bound semantics of a lambda.  The Func<T> is executed immediately and stored as the return value.  At the point you’re setting up your mocks and stubs (the “Arrange” part of “Arrange, Act, Assert”), you may not want the lambda executing – you probably want it delayed until the method is actually executed and Rhino.Mocks plugs in your return value. So let’s make a few small tweaks:

public static IMethodOptions<T> Return<T>(this IMethodOptions<T> opts, Func<T> factory)
{
    opts.Return(default(T)); // required for Rhino.Mocks on non-void methods
    opts.WhenCalled(mi => mi.ReturnValue = factory());
    return opts;
}

As you can see, we still need to set up some kind of return value or Rhino.Mocks will complain as soon as it intercepts a call to our stubbed method.  We use the “WhenCalled” method to set the return value equal to the execution of our lambda.  This gives us the delayed execution we’re looking for and a nice syntax for lambda-based return values in Rhino.Mocks. Technorati Tags: .NET,Rhino.Mocks,Mocking,Extension Methods
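As a quick illustration of the late-bound behavior, here is a small hypothetical usage sketch (IOrderService and the settable Order.Id property are assumptions made for the example; the extension method above is in scope):

// The lambda is evaluated on every call, so each call observes state
// as it is at call time, not at stub-setup time.
int counter = 0;
var service = MockRepository.GenerateStub<IOrderService>();
service.Stub(s => s.GetSomething()).Return(() => new Order { Id = ++counter });

var first = service.GetSomething();  // first.Id == 1
var second = service.GetSomething(); // second.Id == 2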

    Read the article

  • ASP.NET MVC CRUD Validation

    - by Ricardo Peres
    One thing I didn’t cover in my previous post on ASP.NET MVC CRUD with AJAX was how to get model validation information to the client. We want to send any model validation errors to the client in the JSON object that contains the ProductId, RowVersion and Success properties; specifically, if there are any errors, we will add an extra Errors collection property. Here’s how:

[HttpPost]
[AjaxOnly]
[Authorize]
public JsonResult Edit(Product product)
{
    if (this.ModelState.IsValid == true)
    {
        using (ProductContext ctx = new ProductContext())
        {
            Boolean success = false;

            ctx.Entry(product).State = (product.ProductId == 0) ? EntityState.Added : EntityState.Modified;

            try
            {
                success = (ctx.SaveChanges() == 1);
            }
            catch (DbUpdateConcurrencyException)
            {
                ctx.Entry(product).Reload();
            }

            return (this.Json(new { Success = success, ProductId = product.ProductId, RowVersion = Convert.ToBase64String(product.RowVersion) }));
        }
    }
    else
    {
        Dictionary<String, String> errors = new Dictionary<String, String>();

        foreach (KeyValuePair<String, ModelState> keyValue in this.ModelState)
        {
            String key = keyValue.Key;
            ModelState modelState = keyValue.Value;

            foreach (ModelError error in modelState.Errors)
            {
                errors[key] = error.ErrorMessage;
            }
        }

        return (this.Json(new { Success = false, ProductId = 0, RowVersion = String.Empty, Errors = errors }));
    }
}

As for the view, we need to slightly change the onSuccess JavaScript handler on the Single view:

function onSuccess(ctx)
{
    if (typeof (ctx.Success) != 'undefined')
    {
        $('input#ProductId').val(ctx.ProductId);
        $('input#RowVersion').val(ctx.RowVersion);

        if (ctx.Success == false)
        {
            var errors = '';

            if (typeof (ctx.Errors) != 'undefined')
            {
                for (var key in ctx.Errors)
                {
                    errors += key + ': ' + ctx.Errors[key] + '\n';
                }

                window.alert('An error occurred while updating the entity: the model contained the following errors.\n\n' + errors);
            }
            else
            {
                window.alert('An error occurred while updating the entity: it may have been modified by third parties. Please try again.');
            }
        }
        else
        {
            window.alert('Saved successfully');
        }
    }
    else
    {
        if (window.confirm('Not logged in. Login now?') == true)
        {
            document.location.href = '<%: FormsAuthentication.LoginUrl %>?ReturnURL=' + document.location.pathname;
        }
    }
}

The logic is as follows:
If the Edit action method is called for a new entity (the ProductId is 0) and it is valid, the entity is saved, and the JSON result contains a Success flag set to true, a ProductId property with the database-generated primary key and a RowVersion with the server-generated ROWVERSION;
If the model is not valid, the JSON result will contain the Success flag set to false and the Errors collection populated with all the model validation errors;
If the entity already exists in the database (ProductId not 0) and the model is valid, but the stored ROWVERSION is different from the one on the view, the result will set the Success property to false and will return the current (as loaded from the database) value of the ROWVERSION in the RowVersion property.
In a future post I will talk about the possibilities that exist for performing model validation, stay tuned!
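For context, here is a minimal sketch of where those ModelState errors might come from (the Name and Price properties and their rules are assumptions made for illustration; the real Product class comes from the previous post). Standard DataAnnotations attributes on the model are evaluated during model binding, and any failures land in ModelState, which the Edit action above copies into the JSON Errors collection:

using System.ComponentModel.DataAnnotations;

public class Product
{
    public int ProductId { get; set; }

    [Required(ErrorMessage = "The product name is required.")]
    [StringLength(50)]
    public string Name { get; set; } // hypothetical property, for illustration

    [Range(0, 10000)]
    public decimal Price { get; set; } // hypothetical property, for illustration

    public byte[] RowVersion { get; set; }
}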

    Read the article

  • Don’t Kill the Password

    - by Anthony Trudeau
    A week ago Mr. Honan from Wired.com penned an article on security he titled “Kill the Password: Why a String of Characters Can’t Protect Us Anymore.” He asserts that the password is not effective and a new solution is needed. Unfortunately, Mr. Honan was a victim of hacking, and as a result he has a victim’s vendetta. His conclusion is ill-conceived even though there are smatterings of truth and good advice. The password is a security barrier much like a lock on your door. In and of itself it doesn’t guarantee protection. You can have a good password, akin to a steel-reinforced door with the best lock money can buy, or you can have a poor password like “password”, which is more like the sliding latch on a bathroom stall. But, just like in the real world, a lock isn’t always enough. You can have a lock, security system, video cameras, guard dogs, and even armed security guards; but none of that guarantees your protection. Even top secret government agencies can be breached by someone who is just that good (as dramatized in movies like Mission Impossible). And that’s the crux of it. There are real hackers out there that are that good. Killer coding ninja monkeys do exist! We still have locks on our doors, because they still serve their role. Passwords are no different. Security doesn’t end with the password. Most people would agree that stuffing your mattress with your life savings isn’t a good idea even if you have the best locks and security system. Most people agree it’s safest to have the money in a bank. Essentially this is compartmentalization, and compartmentalization extends to the online world as well. You’re at risk if your online banking accounts are linked to the same account as your social networks. This is especially true if you’re lackadaisical about linking those social networks to outside sources, including apps. The object here is to minimize the damage that can be done: an attacker should not be able to get into your bank account just because they breached your Twitter account. Once you’ve compartmentalized, it’s time to prioritize. This simply means deciding how much security you want for the different compartments, which I’ll call security zones. Social networking applications like Facebook provide a lot of security features. However, security features are almost always a compromise with privacy and convenience. It’s similar to an engineering adage, but in this case it’s security, convenience, and privacy – pick two. For example, you might use a safe instead of a bank to store your money, because the convenience of having your money closer or the privacy of not having the bank records is more important than the added security. The following are lists of security do’s and don’ts (these aren’t meant to be exhaustive, and each could be an article in and of itself):
Security Do’s:
Use strong passwords based on a phrase
Use encryption whenever you can (e.g. HTTPS in Facebook)
Use a firewall (and learn to use it properly)
Configure security on your router (including port blocking)
Keep your operating system patched
Make routine backups of important files
Realize that if you’re not paying for it, you’re the product
Security Don’ts:
Link accounts if at all possible
Reuse passwords across your security zones
Use real answers for security questions (e.g. mother’s maiden name)
Trust anything you download
Ignore message boxes shown by your system or browser
Forget to test your backups
Share your primary email indiscriminately
Only you can decide your comfort level between convenience, privacy, and security. 
Attackers are going to find exploits in software. Software is complex and depends on other software. The exploits are the responsibility of the software company, but your security is always your responsibility. Complete security is an illusion, yet there is plenty you can do to minimize the risk online, just as you do in the physical world. Be safe and enjoy what the Internet has to offer. I expect passwords to be necessary for just as long as locks are.

    Read the article

  • Creating a File Upload

    - by jaullo
    To begin, we will talk a little about the FileUpload control, to give a general idea of what it is and how it works. The FileUpload is an ASP.NET control that lets users select a file from any location on their computer and upload it to a predetermined directory through an ASP.NET page. By default this control is limited so that files larger than 4 MB cannot be uploaded; however, from our application's web.config we can change that value, either to raise it or to lower it (a sketch of this setting follows the article). Our example will focus on creating a web control that allows selecting a file and saving it, so let's get started. The first step is to add to our page a web control, which we will call Upload.ascx. Then, in our web control, we add the following markup:

<table style="width: 100%">
    <tr>
        <td colspan="3">
            <div align="center">
                <asp:Label ID="Label1" runat="server" Text="File Upload"></asp:Label>
            </div>
        </td>
    </tr>
    <tr>
        <td style="width: 456px" rowspan="2">&nbsp;</td>
        <td style="width: 386px">
            <div align="center">
                <asp:FileUpload ID="FileUpload1" runat="server" Height="24px" Width="243px" />
                <span id="Span1" runat="server" />
            </div>
        </td>
        <td rowspan="2">&nbsp;</td>
    </tr>
    <tr>
        <td style="width: 386px">
            <div align="center">
                <asp:ImageButton Id="btnupload" runat="server" OnClick="btnupload_Click"
                    ImageUrl="~/Styles/img/upload.png" style="text-align: center" />
            </div>
        </td>
    </tr>
    <tr>
        <td colspan="3">&nbsp;</td>
    </tr>
</table>

This way our control should look something like the screenshot above. Finally, in the code-behind of our control we add the code for our button, which will be in charge of reading the file selected in the FileUpload and saving it to the specified path.

Protected Sub btnupload_Click(ByVal sender As Object, ByVal e As System.Web.UI.ImageClickEventArgs) Handles btnupload.Click
    If FileUpload1.HasFile Then
        Dim fileExt As String
        fileExt = System.IO.Path.GetExtension(FileUpload1.FileName)
        If (fileExt = ".exe") Then
            Label1.Text = "You can't upload .exe file!"
        Else
            Try
                FileUpload1.SaveAs(decrpath & FileUpload1.FileName)
                Label1.Text = "File name: " & _
                    FileUpload1.PostedFile.FileName & "<br>" & _
                    "File Size: " & _
                    FileUpload1.PostedFile.ContentLength & " kb<br>" & _
                    "Content type: " & _
                    FileUpload1.PostedFile.ContentType
            Catch ex As Exception
                Label1.Text = "ERROR: " & ex.Message.ToString()
            End Try
        End If
    Else
        Label1.Text = "You have not specified a file!"
    End If
End Sub

As we can see in the previous code, we have also added other elements which will tell us the file name, the content type and the file size in KB once the file has been uploaded to the server. Finally, keep in mind that decrpath is the path the file will be uploaded to; change it to suit your needs.
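As mentioned at the start of the article, the default 4 MB upload limit can be changed in web.config through the httpRuntime element. Here is a minimal sketch (the 10240 value is just an example; maxRequestLength is expressed in kilobytes, so 10240 KB is 10 MB):

<configuration>
  <system.web>
    <!-- maxRequestLength is in kilobytes; 10240 KB = 10 MB -->
    <httpRuntime maxRequestLength="10240" />
  </system.web>
</configuration>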

    Read the article

  • Adobe Photoshop CS5 vs Photoshop CS5 extended

    - by Edward
    Adobe Photoshop has been an industry standard for most web designers & photographers worldwide. Photoshop CS5 has made photography editing much more refined and the composition process has become much easier than ever before. To study the advantages of Photoshop CS5 Extended over Photoshop CS5 we have written this comparison article, from both a designer's & photographer's perspective. Hopefully it shall help you in your buying/upgrade decision. Photoshop CS5 Photoshop CS5 ships with refined, powerful photography tools. It makes the editing process easier, as fewer steps are involved to remove noise, add grain, create vignettes, correct lens distortions, sharpen, and create HDR images. It has quick image correction and color and tone control for professional purposes. Intelligent image editing and enhancement and extraordinarily advanced compositing have made it a better tool for photographers than earlier versions. It allows users to accelerate workflow with fast performance on 64-bit Windows® and Mac hardware systems and smoother interactions due to more GPU-accelerated features. It also boasts state-of-the-art raw processing with Adobe Photoshop Camera Raw 6 and helps to maximize creative impact. It provides tremendous precision and freedom: it allows the user to easily select intricate image elements, such as hair, and create realistic painting effects. It also allows you to remove any image element and see the space fill in almost magically. It offers easy access to core editing, a streamlined workflow and a flexible work environment, along with creative tools and content. Photoshop CS5 Extended Photoshop CS5 Extended is quite innovative and incorporates 3D elements into 2D artwork directly within the digital imaging application, which gives users an easy on-ramp to 3D image creation. It also provides for 3D editing. It has intelligent image editing and enhancement, offers advanced compositing and has an extraordinary painting and drawing toolset. It provides for video and animation design, and helps to work with specialized images for architecture, manufacturing, engineering, science, and medicine. Where CS5 Extended scores over CS5 CS5 Extended has many features which are not included in CS5. These features are:
Technology for creating 3D extrusions
3D material library and picker
Depth of field for 3D
3D merging and scene composition improvements
3D workflow improvements
Customization of 3D features
Image-based light sources
Shadow catcher for shadow creation
Enhanced ray tracer
Context-sensitive widgets, which allow easy control of objects, lights and cameras
Overlays for materials and mesh boundaries
Photoshop CS5 Extended thus incorporates all the features of CS5 plus the more advanced ones: it allows 3D creation and editing and has other advanced tools beyond the base edition. Redefining the Image-Editing Experience: A Photographer's point of view Photoshop CS5 delivers amazing features and creative options so even new users can perform advanced image manipulations and compositions. The breathtaking image intelligence behind Content-Aware Fill magically removes any image detail or object, examines the surroundings and seamlessly fills in the space left behind, matching the lighting, tone and noise of the surrounding area. The new Refine Edge makes nearly-impossible image selections possible. Masking was never easier: the toughest types of edges, such as hair and foliage, seem easier to fix. 
To sum up, the following are a few advantages of CS5 Extended over previous versions:
64-bit processing
Content-Aware Fill
Refine Edge, which “makes nearly-impossible image selections possible”
HDR Pro, including ghost artifact removal and HDR toning, which gives the look of HDR with a single exposure
New brush options
Improved image management with enhanced Adobe Bridge
Lens corrections
Improved black-and-white conversions
Puppet Warp: precisely reposition or warp any image element
Adobe Camera Raw 6
Pricing and Availability Adobe Photoshop CS5 and CS5 Extended are available through Adobe Authorized Resellers & the Adobe Store. Estimated street price for Adobe Photoshop CS5 is US$699 and US$999 for Photoshop CS5 Extended. Upgrade pricing and volume licensing are also available. Related posts: 10 Free Alternatives for Adobe Photoshop Software Web based Alternatives to Photoshop 15 Useful Adobe Illustrator Tutorials For Designers

    Read the article

  • The Evolution Of C#

    - by Paulo Morgado
    The first release of C# (C# 1.0) was all about building a new language for managed code that appealed, mostly, to C++ and Java programmers. The second release (C# 2.0) was mostly about adding what there wasn’t time to build into the 1.0 release; the main feature of this release was generics. The third release (C# 3.0) was all about reducing the impedance mismatch between general-purpose programming languages and databases. To achieve this goal, several functional programming features were added to the language and LINQ was born. Going forward, new trends are showing up in the industry and modern programming languages need to be more: Declarative. With imperative languages, although keeping an eye on the what, programs need to focus on the how. This leads to over-specification of the solution to the problem at hand, making it next to impossible for the execution engine to be smart about the execution of the program and optimize it to run more efficiently (given the hardware available, for example). Declarative languages, on the other hand, focus only on the what and leave the how to the execution engine. LINQ made C# more declarative by using higher-level constructs like orderby and group by that give the execution engine a much better chance of optimizing the execution (by parallelizing it, for example). Concurrent. Concurrency is hard, needs to be thought about, and is very hard to shoehorn into a programming language. Parallel.For (from the Parallel Extensions) looks like a parallel for because enough expressiveness has been built into C# 3.0 to allow this without having to commit to specific language syntax. Dynamic. There has been lots of debate on which are the better programming languages: static or dynamic. The fact is that both have good qualities and users of both types of languages want to have it all. All these trends require a paradigm shift. C# is, in many ways, already a multi-paradigm language. It’s still very object-oriented (class-oriented, as some might say) but it can be argued that C# 3.0 has become a functional programming language, because it has all the cornerstones of what a functional programming language needs; moving forward, it will have even more. Besides the influence of these trends, there was a decision of co-evolution of the C# and Visual Basic programming languages. Since their inception, there has been some effort to position C# and Visual Basic against each other and to try to explain what should be done with each language or what kind of programmers use one or the other. Each language should be chosen based on the past experience and familiarity of the developer/team/project/company and not on particular features. In the past, every time a feature was added to one language, the users of the other wanted that feature too. Going forward, when a feature is added to one language, the other will work hard to add the same feature. This doesn’t mean that XML literals will be added to C# (because almost the same can be achieved with LINQ to XML), but Visual Basic will have auto-implemented properties. Most of these features require or are built on top of features of the .NET Framework, and the focus for C# 4.0 was on dynamic programming: not just dynamic types, but being able to talk to anything that isn’t a .NET class. Also introduced in C# 4.0 are covariance and contravariance for generic interfaces and delegates. Stay tuned for more on the new C# 4.0 features.
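As a quick illustration of the two C# 4.0 features just mentioned, here is a small self-contained sketch (a console program added for this writeup; the class name is arbitrary):

using System;
using System.Collections.Generic;

class CSharp4Demo
{
    static void Main()
    {
        // dynamic: member resolution is deferred until runtime, which is what
        // lets C# 4.0 talk to things that are not .NET classes (COM, the DLR, ...).
        dynamic value = "hello";
        Console.WriteLine(value.Length); // resolved at runtime: prints 5

        // Covariance: IEnumerable<out T> lets a sequence of a derived type
        // be used where a sequence of a base type is expected.
        IEnumerable<string> strings = new List<string> { "a", "b" };
        IEnumerable<object> objects = strings; // legal as of C# 4.0

        // Contravariance: Action<in T> lets a handler of a more general type
        // stand in for a handler of a more specific one.
        Action<object> printAny = o => Console.WriteLine(o);
        Action<string> printString = printAny; // legal as of C# 4.0
        printString("works");
    }
}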

    Read the article

< Previous Page | 947 948 949 950 951 952 953 954 955 956 957 958  | Next Page >