Search Results

Search found 17278 results on 692 pages for 'directory conventions'.


  • Jenkins and Ant - ant.bat not recognized even though env vars are set correctly

    - by Blue
    I'm trying to set up Jenkins to work with Ant, but I get the following error:

        Started by user anonymous
        Building in workspace C:\.jenkins\workspace\CI Demo
        Checking out a fresh workspace because there's no workspace at C:\.jenkins\workspace\CI Demo
        Cleaning local directory .
        Checking out https:///svn/CI_Demo/trunk at revision '2013-10-27T19:34:31.549 +0000'
        At revision 6
        [CI Demo] $ cmd.exe /C '"ant.bat jar && exit %%ERRORLEVEL%%"'
        'ant.bat' is not recognized as an internal or external command, operable program or batch file.
        Build step 'Invoke Ant' marked build as failure
        Finished: FAILURE

    However, JAVA_HOME and ANT_HOME are set, and I added the following to "Path": %ANT_HOME%\bin;%JAVA_HOME%\bin

    And as you can see, the commands are recognized when executed in CMD:

        C:\Users\Administrator>java -version
        java version "1.7.0_45"
        Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
        Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

        C:\Users\Administrator>ant -version
        Apache Ant(TM) version 1.9.2 compiled on July 8 2013

        C:\Users\Administrator>ant.bat
        Buildfile: build.xml does not exist!
        Build failed

    I would appreciate your help. Thank you, N

    Read the article

  • qt cannot open input file 'c:\Qt\qt\lib\qtmaind.lib'

    - by robUK
    Hello, I am using Qt 4.5. I have created a project and I want to compile it in Visual Studio 2008 for Windows Mobile 6.0, so I created the project files like this:

        D:\Projects\Phone_PDA\Phone_PDA>set QMAKESPEC=win32-msvc2008
        D:\Projects\Phone_PDA\Phone_PDA>qmake -tp vc

    The VS project was created. However, when I try to compile I get this error:

        LINK : fatal error LNK1181: cannot open input file 'c:\Qt\qt\lib\qtmaind.lib'

    However, when I check my libraries and includes under project properties in Visual Studio, I have this:

        Additional Include Directories: c:\Qt\qt\include\QtCore; c:\Qt\qt\include\QtGui; c:\Qt\qt\include; c:\Qt\qt\include\ActiveQt; debug; c:\Qt\qt\mkspecs\win32-msvc2008
        Additional Library Directories: c:\Qt\qt\lib
        Additional Dependencies: c:\Qt\qt\lib\qtmaind.lib c:\Qt\qt\lib\QtGuid4.lib c:\Qt\qt\lib\QtCored4.lib

    However, when I browse to the directory c:\Qt\qt\lib, all I have is qtmain.prl and qtmaind.prl; I don't have qtmaind.lib or qtmain.lib. Many thanks for any suggestions.

    Read the article

  • PetStore 2.0 - java.lang.RuntimeException: javax.naming.NamingException: Lookup failed for 'jdbc/Pet

    - by Harry Pham
    I downloaded PetStore 2.0 from https://blueprints.dev.java.net/servlets/ProjectDocumentList?folderID=5315&expandFolder=5315&folderID=0. I tried to build it in NetBeans 6.8 and I got this error:

        SEVERE: Exception while preparing the app
        java.lang.RuntimeException: javax.naming.NamingException: Lookup failed for 'jdbc/PetstoreDB' in SerialContext [Root exception is javax.naming.NameNotFoundException: PetstoreDB not found]

    It seems like it can't find 'jdbc/PetstoreDB'. So I went back to the website, and it looks like I have to build the database via an Ant script. So in the PetStore home directory I ran these commands:

        ant setup
        ant run

    And I got the same error when I ran ant run:

        [exec] Deprecated syntax, instead use:
        [exec] asadmin --port 4848 --host localhost --passwordfile /Users/KingdomHeart/.asadminpass --user admin deploy [options] ...
        [exec] com.sun.enterprise.admin.cli.CommandException: remote failure: Exception while preparing the app : java.lang.RuntimeException: javax.naming.NamingException: Lookup failed for 'jdbc/PetstoreDB' in SerialContext [Root exception is javax.naming.NameNotFoundException: PetstoreDB not found]
        [exec]
        [exec]
        [exec] Command deploy failed.

    Any idea what happened?

    Read the article

  • Setting ProcessStartInfo.WorkingDirectory to a UNC Path

    - by TJOHN
    I have a utility written in VB.NET that runs as a scheduled task. It internally calls another executable, and it has to access a mapped drive. Apparently Windows has issues with scheduled tasks accessing mapped drives when the user is not logged on, even when the authentication credentials are supplied to the task itself. OK, fine. To get around this I just passed my application the UNC path:

        process.StartInfo.FileName = "name of executable"
        process.StartInfo.WorkingDirectory = "\\unc path\"
        process.StartInfo.WindowStyle = ProcessWindowStyle.Hidden
        process.StartInfo.Arguments = "arguments to executable"
        process.Start()

    This is the same implementation I used with the mapped drive; however, using the UNC path, the process is not behaving as if the UNC path is the working directory. Are there any known issues setting ProcessStartInfo.WorkingDirectory to a UNC path? If not, what am I doing wrong?
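    One likely culprit, offered as a hedged note since the snippet above doesn't set it: ProcessStartInfo.WorkingDirectory only becomes the started process's working directory when UseShellExecute is false; when it is true (the default), the property instead names the directory that contains the executable. Separately, if the child process is cmd.exe or a batch file, CMD does not accept UNC paths as a current directory at all. A minimal C# sketch of the UseShellExecute = false variant (the paths are placeholders, not values from the question):

        using System.Diagnostics;

        class Launcher
        {
            static void Main()
            {
                var psi = new ProcessStartInfo
                {
                    FileName = @"\\server\share\tool.exe", // placeholder: fully qualified path to the executable
                    WorkingDirectory = @"\\server\share",  // placeholder: honored only because UseShellExecute = false
                    Arguments = "arguments to executable",
                    UseShellExecute = false,
                    CreateNoWindow = true                  // stands in for WindowStyle.Hidden, which needs shell execution
                };

                using (var process = Process.Start(psi))
                {
                    process.WaitForExit();
                }
            }
        }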

    Read the article

  • How to play YouTube videos using YouTube.apk on the Android 2.1 platform

    - by danny
    I have a problem: I have a YouTube.apk, version 1.5.20, and I would like to play YouTube videos on the Android 2.1 platform, but I hit some problems:

        I/ActivityManager( 52): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=com.google.android.youtube/.HomeA}
        I/ActivityManager( 52): Start proc com.google.android.youtube for activity com.google.android.youtube/.HomeActivity: pid=373 uid=10021 gids={3003}
        D/installd( 36): DexInv: --- BEGIN '/system/app/YouTube.apk' ---
        D/dalvikvm( 379): DexOpt: load 106ms, verify 597ms, opt 24ms
        D/installd( 36): DexInv: --- END '/system/app/YouTube.apk' (success) ---
        I/ActivityThread( 373): Publishing provider com.google.android.youtube.SuggestionProvider: com.google.android.youtube.suggest.SuggestionProvider
        I/YouTube ( 373): Distribution channel:mvapp-android-mid120
        D/ ( 373): unable to unlink '/data/data/com.google.android.youtube/shared_prefs/youtube.xml.bak': No such file or directory (errno=2)
        I/AndroidRuntime( 373): AndroidRuntime onExit calling exit(-42)
        I/ActivityManager( 52): Process com.google.android.youtube (pid 373) has died.
        D/Zygote ( 33): Process 373 exited cleanly (214)
        I/UsageStats( 52): Unexpected resume of com.android.launcher while already resumed in com.google.android.youtube
        W/WindowManager( 52): Rebuild removed 2 windows but added 1
        W/InputManagerService( 52): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@4406d498

    So, how do I fix this problem? Thanks.

    Read the article

  • Should I add the vcxproj.filters files to source control

    - by jschroedl
    While evaluating Visual Studio 2010 Beta 2, I see that in the converted directory my vcproj files have become vcxproj files. There are also vcxproj.filters files alongside each project, which appear to contain a description of the folder structure (\Source Files, \Header Files, etc.). Do you think these filter files should be kept per-user, or should they be shared across the whole dev group and checked into SCC? My current thinking is to check them in, but I wondered if there are any reasons not to do that, or perhaps good reasons that I should definitely check them in. The obvious benefit is that the folder structures will match if I'm looking at someone else's machine, but maybe they'd like to reorganize things logically?

    Read the article

  • Piping to findstr's input

    - by Gauthier
    I have a text file with a list of macro names (one per line). My final goal is to get a printout of how many times each macro name appears in the files of the current directory. The macro names are in C:\temp\macros.txt, and

        type C:\temp\macros.txt

    in the command prompt prints the list alright. Now I want to pipe that output to the standard input of findstr:

        type C:\temp\macros.txt | findstr *.ss

    (.ss is the file type where I am looking for the macro names.) This does not seem to work: I get no result, very fast, as if it does not try at all. On the other hand,

        findstr <the first row of the macro list> *.ss

    does work. I also tried

        findstr *.ss < c:\temp\macros.txt

    with no success.

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
    A simple strategy for mapping classes to database tables might be "one table for every entity persistent class." This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both "is a" and "has a" relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don't support type inheritance, and even when it's available, it's usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy:

        Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information.
        Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships.
        Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.

    I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and will learn about "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario.

    Inheritance Mapping with Entity Framework Code First

    All of the inheritance mapping strategies that we discuss in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I'm a big fan of the EF Code First approach, and I'm pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, not only does Code First fully support all the strategies, it also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot, and it is now more intuitive and concise compared to CTP4.

    A Note For Those Who Follow Other Entity Framework Approaches

    If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series: although the implementation is Code First specific, the explanations around each of the strategies apply perfectly to all approaches, be it Code First or others.

    A Note For Those Who Are New to Entity Framework and Code First

    If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine to do Code First development and that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First, like Complex Types and Shared Primary Key Associations.

    A Top-Down Development Scenario

    These posts take a top-down approach; they assume that you're starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C#, and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you're working bottom up, starting with existing database tables.
    I'll show some tricks along the way that help you deal with imperfect table layouts. Let's start with the mapping of entity inheritance.

    The Domain Model

    In our domain model, we have a BillingDetail base class, which is abstract (note the italic font on the UML class diagram below). We allow various billing types and represent them as subclasses of BillingDetail. For now, we support CreditCard and BankAccount.

    Implement the Object Model with Code First

    As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class, BillingDetail. Code First will find the other classes in the hierarchy based on the Reachability Convention.

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }
        }

    This object model is all that is needed to enable inheritance with Code First. If you put this in your application, you will be able to immediately start working with the database and do CRUD operations. Before going into the details of how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries.

    Polymorphic Queries

    LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries, that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query:

        IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b;
        List<BillingDetail> billingDetails = linqQuery.ToList();

    Or the same query in EntitySQL:

        string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b";
        ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext
                                                                                 .CreateQuery<BillingDetail>(eSqlQuery);
        List<BillingDetail> billingDetails = objectQuery.ToList();

    linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class, but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount.

    Non-polymorphic Queries

    All LINQ to Entities and EntitySQL queries are polymorphic: they return not only instances of the specific entity class to which they refer, but all subclasses of that class as well. Non-polymorphic queries, on the other hand, are queries whose polymorphism is restricted and which only return instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method.
    For example, the following query returns only instances of BankAccount:

        IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b;

    EntitySQL has an OFTYPE operator that does the same thing:

        string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b";

    In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators:

        string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)
                             FROM BillingDetails AS b
                             WHERE b IS OF(Model.BankAccount)";

    (Note that in the above query, Model.BankAccount is the fully qualified name for the BankAccount class. You need to replace "Model" with your own namespace name.)

    Table per Class Hierarchy (TPH)

    An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don't have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy.

    This mapping strategy is a winner in terms of both performance and simplicity. It's the best-performing way to represent polymorphism (both polymorphic and non-polymorphic queries perform well) and it's even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions, and schema evolution is straightforward.

    Discriminator Column

    As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn't a property of the persistent class in our object model; it's used internally by EF Code First. By default, the column name is "Discriminator" and its type is string. The values default to the persistent class names, in this case "BankAccount" or "CreditCard". EF Code First automatically sets and retrieves the discriminator values.

    TPH Requires Properties in Subclasses to be Nullable in the Database

    TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. In a typical mapping scenario, however, Code First always creates an (INT, NOT NULL) column in the database for an int property in a persistent class. But in this case, since a BankAccount instance won't have a CardType property, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity.

    TPH Violates the Third Normal Form

    Another important issue is normalization. We've created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
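    To make the nullability and discriminator points concrete, here is a minimal usage sketch (the sample values are invented, not from the original post) that saves one instance of each subclass through the InheritanceMappingContext defined above:

        using (var context = new InheritanceMappingContext())
        {
            context.BillingDetails.Add(new BankAccount
            {
                Owner = "Jane Doe",      // invented sample data
                Number = "1234",
                BankName = "First Bank",
                Swift = "FIRBUS33"
            });
            context.BillingDetails.Add(new CreditCard
            {
                Owner = "Jane Doe",
                Number = "5678",
                CardType = 1,
                ExpiryMonth = "12",
                ExpiryYear = "2012"
            });
            context.SaveChanges();

            // Both objects land as rows in the single BillingDetails table. The
            // BankAccount row stores NULL in CardType, ExpiryMonth and ExpiryYear;
            // the CreditCard row stores NULL in BankName and Swift. The
            // Discriminator column ("BankAccount" / "CreditCard") tells them apart.
        }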
    Generated SQL Query

    Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement:

        SELECT
         [Extent1].[Discriminator] AS [Discriminator],
         [Extent1].[BillingDetailId] AS [BillingDetailId],
         [Extent1].[Owner] AS [Owner],
         [Extent1].[Number] AS [Number],
         [Extent1].[BankName] AS [BankName],
         [Extent1].[Swift] AS [Swift],
         [Extent1].[CardType] AS [CardType],
         [Extent1].[ExpiryMonth] AS [ExpiryMonth],
         [Extent1].[ExpiryYear] AS [ExpiryYear]
        FROM [dbo].[BillingDetails] AS [Extent1]
        WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')

    The non-polymorphic query for the BankAccount subclass generates this SQL statement:

        SELECT
         [Extent1].[BillingDetailId] AS [BillingDetailId],
         [Extent1].[Owner] AS [Owner],
         [Extent1].[Number] AS [Number],
         [Extent1].[BankName] AS [BankName],
         [Extent1].[Swift] AS [Swift]
        FROM [dbo].[BillingDetails] AS [Extent1]
        WHERE [Extent1].[Discriminator] = 'BankAccount'

    Note how Code First adds a restriction on the discriminator column, and also how it selects only those columns that belong to the BankAccount entity.

    Change the Discriminator Column Data Type and Values With the Fluent API

    Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<BillingDetail>()
                        .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                        .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
        }

    Changing the data type of the discriminator column is also interesting. In the above code, we passed strings to the HasValue method, but this method has been defined to accept an argument of type object:

        public void HasValue(object value);

    Therefore, if we pass a value of type int, for example, Code First will not only use our desired values (i.e. 1 and 2) in the discriminator column, but will also change the column type to be (INT, NOT NULL):

        modelBuilder.Entity<BillingDetail>()
                    .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
                    .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));

    Summary

    In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design; after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn't expose you to this problem.

    References
    ADO.NET team blog
    Java Persistence with Hibernate book

    Read the article

  • Open Excel 2007 files and save in 97-2003 format using VBA

    - by ABB
    I have a weird situation where I have a set of Excel files, all having the extension .xls, in a directory where I can open all of them just fine in Excel 2007. The odd thing is that I cannot open them in Excel 2003, on the same machine, without first opening each file in 2007 and saving it as an "Excel 97-2003 Workbook". Before I save the file as an "Excel 97-2003 Workbook" from Excel 2007, when I open the Excel files in 2003 I get the error that the file is not in a recognizable format.

    So my question is: if I already have the Excel file opened in 2007 and I already have the file name of the open file stored in a variable, how can I programmatically mimic the action of going up to the Office button in the upper left and selecting "Save As", then selecting "Excel 97-2003 Workbook"? I've tried something like the below (FileFormat 56 corresponds to xlExcel8, the 97-2003 .xls format), but it does not save the file at all:

        ActiveWorkbook.SaveAs TempFilePath & TempFileName & ".xls", FileFormat:=56

    Thanks for any help or guidance!

    Read the article

  • JAMES Mailet development process

    - by itsadok
    I'm starting a project that involves writing mailets for Apache James. As far as I can tell, the only way to test a change in my code (on Windows) is through the following steps:

        1. Compile the mailet code
        2. Build a jar file containing the mailet
        3. Copy the jar file into the apps/james/SAR-INF/lib directory
        4. Start JAMES from run.bat
        5. Run the test
        6. Stop JAMES by telnetting to port 4555 and issuing a shutdown command (I guess on Linux a SIGTERM would suffice)

    I can automate all these steps using Ant and some scripting magic, but I was wondering if I was missing something. Does anyone here have experience developing mailets? Did you use a similar process, or is there an easier way? For example, is there a way to make a running James instance reload the mailets JAR?

    Read the article

  • Show PDF in iPad using CGPDF APIs

    - by AJ
    I have learned that Apple has released the CGPDF APIs in SDK 3.2 for drawing PDF content. What I understand from these APIs is that you can draw a PDF to a data object or a PDF file. You can then export it, maybe to your sandbox directory, or add it as an attachment to a mail. But I am not sure if we can use these APIs to read a PDF from the application bundle and show it to the user page-by-page on the screen. What I want to do is open a PDF of a magazine in a magazine reader app. I was also wondering if we can identify the links in a PDF file and open them in the app. Let me know if you have done, or are doing, anything like this. Thanks, AJ

    Read the article

  • OpenVPN not connecting

    - by LandArch
    There have been a number of posts similar to this, but none seem to satisfy my need. Plus, I am an Ubuntu newbie. I followed this tutorial to completely set up OpenVPN on Ubuntu 12.04 Server. Here is my server.conf file:

        #################################################
        # Sample OpenVPN 2.0 config file for multi-client server.
        #
        # This file is for the server side of a many-clients <-> one-server
        # OpenVPN configuration.
        #
        # OpenVPN also supports single-machine <-> single-machine
        # configurations (See the Examples page on the web site for more info).
        #
        # This config should work on Windows or Linux/BSD systems. Remember on
        # Windows to quote pathnames and use double backslashes, e.g.:
        # "C:\\Program Files\\OpenVPN\\config\\foo.key"
        #
        # Comments are preceded with '#' or ';'
        #################################################

        # Which local IP address should OpenVPN listen on? (optional)
        local 192.168.13.8

        # Which TCP/UDP port should OpenVPN listen on? If you want to run
        # multiple OpenVPN instances on the same machine, use a different port
        # number for each one. You will need to open up this port on your firewall.
        port 1194

        # TCP or UDP server?
        proto tcp
        ;proto udp

        # "dev tun" will create a routed IP tunnel, "dev tap" will create an
        # ethernet tunnel. Use "dev tap0" if you are ethernet bridging and have
        # precreated a tap0 virtual interface and bridged it with your ethernet
        # interface. If you want to control access policies over the VPN, you
        # must create firewall rules for the the TUN/TAP interface. On
        # non-Windows systems, you can give an explicit unit number, such as
        # tun0. On Windows, use "dev-node" for this. On most systems, the VPN
        # will not function unless you partially or fully disable the firewall
        # for the TUN/TAP interface.
        dev tap0
        up "/etc/openvpn/up.sh br0"
        down "/etc/openvpn/down.sh br0"
        ;dev tun

        # Windows needs the TAP-Win32 adapter name from the Network Connections
        # panel if you have more than one. On XP SP2 or higher, you may need to
        # selectively disable the Windows firewall for the TAP adapter.
        # Non-Windows systems usually don't need this.
        ;dev-node MyTap

        # SSL/TLS root certificate (ca), certificate (cert), and private key
        # (key). Each client and the server must have their own cert and key
        # file. The server and all clients will use the same ca file.
        #
        # See the "easy-rsa" directory for a series of scripts for generating
        # RSA certificates and private keys. Remember to use a unique Common
        # Name for the server and each of the client certificates.
        #
        # Any X509 key management system can be used. OpenVPN can also use a
        # PKCS #12 formatted key file (see "pkcs12" directive in man page).
        ca "/etc/openvpn/ca.crt"
        cert "/etc/openvpn/server.crt"
        key "/etc/openvpn/server.key"  # This file should be kept secret

        # Diffie hellman parameters. Generate your own with:
        #   openssl dhparam -out dh1024.pem 1024
        # Substitute 2048 for 1024 if you are using 2048 bit keys.
        dh dh1024.pem

        # Configure server mode and supply a VPN subnet for OpenVPN to draw
        # client addresses from. The server will take 10.8.0.1 for itself, the
        # rest will be made available to clients. Each client will be able to
        # reach the server on 10.8.0.1. Comment this line out if you are
        # ethernet bridging. See the man page for more info.
        ;server 10.8.0.0 255.255.255.0

        # Maintain a record of client <-> virtual IP address associations in
        # this file. If OpenVPN goes down or is restarted, reconnecting clients
        # can be assigned the same virtual IP address from the pool that was
        # previously assigned.
        ifconfig-pool-persist ipp.txt

        # Configure server mode for ethernet bridging. You must first use your
        # OS's bridging capability to bridge the TAP interface with the
        # ethernet NIC interface. Then you must manually set the IP/netmask on
        # the bridge interface, here we assume 10.8.0.4/255.255.255.0. Finally
        # we must set aside an IP range in this subnet (start=10.8.0.50
        # end=10.8.0.100) to allocate to connecting clients. Leave this line
        # commented out unless you are ethernet bridging.
        server-bridge 192.168.13.101 255.255.255.0 192.168.13.105 192.168.13.200

        # Configure server mode for ethernet bridging using a DHCP-proxy, where
        # clients talk to the OpenVPN server-side DHCP server to receive their
        # IP address allocation and DNS server addresses. You must first use
        # your OS's bridging capability to bridge the TAP interface with the
        # ethernet NIC interface. Note: this mode only works on clients (such
        # as Windows), where the client-side TAP adapter is bound to a DHCP
        # client.
        ;server-bridge

        # Push routes to the client to allow it to reach other private subnets
        # behind the server. Remember that these private subnets will also need
        # to know to route the OpenVPN client address pool
        # (10.8.0.0/255.255.255.0) back to the OpenVPN server.
        push "route 192.168.13.1 255.255.255.0"
        push "dhcp-option DNS 192.168.13.201"
        push "dhcp-option DOMAIN blahblah.dyndns-wiki.com"
        ;push "route 192.168.20.0 255.255.255.0"

        # To assign specific IP addresses to specific clients or if a
        # connecting client has a private subnet behind it that should also
        # have VPN access, use the subdirectory "ccd" for client-specific
        # configuration files (see man page for more info).

        # EXAMPLE: Suppose the client having the certificate common name
        # "Thelonious" also has a small subnet behind his connecting machine,
        # such as 192.168.40.128/255.255.255.248. First, uncomment out these
        # lines:
        ;client-config-dir ccd
        ;route 192.168.40.128 255.255.255.248
        # Then create a file ccd/Thelonious with this line:
        #   iroute 192.168.40.128 255.255.255.248
        # This will allow Thelonious' private subnet to access the VPN. This
        # example will only work if you are routing, not bridging, i.e. you are
        # using "dev tun" and "server" directives.

        # EXAMPLE: Suppose you want to give Thelonious a fixed VPN IP address
        # of 10.9.0.1. First uncomment out these lines:
        ;client-config-dir ccd
        ;route 10.9.0.0 255.255.255.252
        # Then add this line to ccd/Thelonious:
        #   ifconfig-push 10.9.0.1 10.9.0.2

        # Suppose that you want to enable different firewall access policies
        # for different groups of clients. There are two methods:
        # (1) Run multiple OpenVPN daemons, one for each group, and firewall
        #     the TUN/TAP interface for each group/daemon appropriately.
        # (2) (Advanced) Create a script to dynamically modify the firewall in
        #     response to access from different clients. See man page for more
        #     info on learn-address script.
        ;learn-address ./script

        # If enabled, this directive will configure all clients to redirect
        # their default network gateway through the VPN, causing all IP traffic
        # such as web browsing and and DNS lookups to go through the VPN (The
        # OpenVPN server machine may need to NAT or bridge the TUN/TAP
        # interface to the internet in order for this to work properly).
        ;push "redirect-gateway def1 bypass-dhcp"

        # Certain Windows-specific network settings can be pushed to clients,
        # such as DNS or WINS server addresses. CAVEAT:
        # http://openvpn.net/faq.html#dhcpcaveats
        # The addresses below refer to the public DNS servers provided by
        # opendns.com.
        ;push "dhcp-option DNS 208.67.222.222"
        ;push "dhcp-option DNS 208.67.220.220"

        # Uncomment this directive to allow different clients to be able to
        # "see" each other. By default, clients will only see the server. To
        # force clients to only see the server, you will also need to
        # appropriately firewall the server's TUN/TAP interface.
        ;client-to-client

        # Uncomment this directive if multiple clients might connect with the
        # same certificate/key files or common names. This is recommended only
        # for testing purposes. For production use, each client should have its
        # own certificate/key pair.
        #
        # IF YOU HAVE NOT GENERATED INDIVIDUAL CERTIFICATE/KEY PAIRS FOR EACH
        # CLIENT, EACH HAVING ITS OWN UNIQUE "COMMON NAME", UNCOMMENT THIS
        # LINE OUT.
        ;duplicate-cn

        # The keepalive directive causes ping-like messages to be sent back and
        # forth over the link so that each side knows when the other side has
        # gone down. Ping every 10 seconds, assume that remote peer is down if
        # no ping received during a 120 second time period.
        keepalive 10 120

        # For extra security beyond that provided by SSL/TLS, create an "HMAC
        # firewall" to help block DoS attacks and UDP port flooding.
        #
        # Generate with:
        #   openvpn --genkey --secret ta.key
        #
        # The server and each client must have a copy of this key. The second
        # parameter should be '0' on the server and '1' on the clients.
        ;tls-auth ta.key 0  # This file is secret

        # Select a cryptographic cipher. This config item must be copied to
        # the client config file as well.
        ;cipher BF-CBC        # Blowfish (default)
        ;cipher AES-128-CBC   # AES
        ;cipher DES-EDE3-CBC  # Triple-DES

        # Enable compression on the VPN link. If you enable it here, you must
        # also enable it in the client config file.
        comp-lzo

        # The maximum number of concurrently connected clients we want to
        # allow.
        ;max-clients 100

        # It's a good idea to reduce the OpenVPN daemon's privileges after
        # initialization.
        #
        # You can uncomment this out on non-Windows systems.
        user nobody
        group nogroup

        # The persist options will try to avoid accessing certain resources on
        # restart that may no longer be accessible because of the privilege
        # downgrade.
        persist-key
        persist-tun

        # Output a short status file showing current connections, truncated
        # and rewritten every minute.
        status openvpn-status.log

        # By default, log messages will go to the syslog (or on Windows, if
        # running as a service, they will go to the
        # "\Program Files\OpenVPN\log" directory). Use log or log-append to
        # override this default. "log" will truncate the log file on OpenVPN
        # startup, while "log-append" will append to it. Use one or the other
        # (but not both).
        ;log openvpn.log
        ;log-append openvpn.log

        # Set the appropriate level of log file verbosity.
        #
        # 0 is silent, except for fatal errors
        # 4 is reasonable for general usage
        # 5 and 6 can help to debug connection problems
        # 9 is extremely verbose
        verb 3

        # Silence repeating messages. At most 20 sequential messages of the
        # same message category will be output to the log.
        ;mute 20

    I am using Windows 7 as the client and set that up accordingly using the OpenVPN GUI. That conf file is as follows:

        ##############################################
        # Sample client-side OpenVPN 2.0 config file for connecting to
        # multi-client server.
        #
        # This configuration can be used by multiple clients, however each
        # client should have its own cert and key files.
        #
        # On Windows, you might want to rename this file so it has a .ovpn
        # extension
        ##############################################

        # Specify that we are a client and that we will be pulling certain
        # config file directives from the server.
        client

        # Use the same setting as you are using on the server. On most
        # systems, the VPN will not function unless you partially or fully
        # disable the firewall for the TUN/TAP interface.
        dev tap0
        up "/etc/openvpn/up.sh br0"
        down "/etc/openvpn/down.sh br0"
        ;dev tun

        # Windows needs the TAP-Win32 adapter name from the Network
        # Connections panel if you have more than one. On XP SP2, you may need
        # to disable the firewall for the TAP adapter.
        ;dev-node MyTap

        # Are we connecting to a TCP or UDP server? Use the same setting as
        # on the server.
        proto tcp
        ;proto udp

        # The hostname/IP and port of the server. You can have multiple remote
        # entries to load balance between the servers.
        blahblah.dyndns-wiki.com 1194
        ;remote my-server-2 1194

        # Choose a random host from the remote list for load-balancing.
        # Otherwise try hosts in the order specified.
        ;remote-random

        # Keep trying indefinitely to resolve the host name of the OpenVPN
        # server. Very useful on machines which are not permanently connected
        # to the internet such as laptops.
        resolv-retry infinite

        # Most clients don't need to bind to a specific local port number.
        nobind

        # Downgrade privileges after initialization (non-Windows only)
        user nobody
        group nobody

        # Try to preserve some state across restarts.
        persist-key
        persist-tun

        # If you are connecting through an HTTP proxy to reach the actual
        # OpenVPN server, put the proxy server/IP and port number here. See
        # the man page if your proxy server requires authentication.
        ;http-proxy-retry # retry on connection failures
        ;http-proxy [proxy server] [proxy port #]

        # Wireless networks often produce a lot of duplicate packets. Set this
        # flag to silence duplicate packet warnings.
        ;mute-replay-warnings

        # SSL/TLS parms. See the server config file for more description. It's
        # best to use a separate .crt/.key file pair for each client. A single
        # ca file can be used for all clients.
        ca "C:\\Program Files\OpenVPN\config\\ca.crt"
        cert "C:\\Program Files\OpenVPN\config\\ChadMWade-THINK.crt"
        key "C:\\Program Files\OpenVPN\config\\ChadMWade-THINK.key"

        # Verify server certificate by checking that the certificate has the
        # nsCertType field set to "server". This is an important precaution to
        # protect against a potential attack discussed here:
        # http://openvpn.net/howto.html#mitm
        #
        # To use this feature, you will need to generate your server
        # certificates with the nsCertType field set to "server". The
        # build-key-server script in the easy-rsa folder will do this.
        ns-cert-type server

        # If a tls-auth key is used on the server then every client must also
        # have the key.
        ;tls-auth ta.key 1

        # Select a cryptographic cipher. If the cipher option is used on the
        # server then you must also specify it here.
        ;cipher x

        # Enable compression on the VPN link. Don't enable this unless it is
        # also enabled in the server config file.
        comp-lzo

        # Set log file verbosity.
        verb 3

        # Silence repeating messages
        ;mute 20

    Not sure what's left to do.

    Read the article

  • Visual Studio 2010 localization resource files, how to add in strings automatically?

    - by JL
    I have a certain project that has a resource directory with a .resx file for each language supported in the product. Right now I am adding these strings by hand using the Visual Studio 2010 IDE, but because there are a large number of strings, this manual management of these resources can get tricky, and something can easily get omitted in perhaps just one .resx file. Do you know of a good resource add-on for Visual Studio 2010 that will allow you to sync and validate a group of resx files? The built-in functionality for handling resx seems the same as it was in 2008 and requires a lot of manual effort. I guess what would be nice would be the ability to define all resources in the main language and then have these strings carried across to the remaining languages automatically. Does such functionality exist? Even a good CodePlex project perhaps?
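    Not an add-on recommendation, but the validation half is easy to script as a stopgap. A minimal C# sketch (the file names passed on the command line are hypothetical) that reads the master .resx with System.Resources.ResXResourceReader, from System.Windows.Forms.dll, and reports keys missing from each translated file:

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Resources; // requires a reference to System.Windows.Forms.dll

        class ResxAudit
        {
            // Collect all resource keys defined in one .resx file.
            static HashSet<string> Keys(string path)
            {
                var keys = new HashSet<string>();
                using (var reader = new ResXResourceReader(path))
                {
                    foreach (DictionaryEntry entry in reader)
                        keys.Add((string)entry.Key);
                }
                return keys;
            }

            static void Main(string[] args)
            {
                // args[0]: the master-language .resx; remaining args: translations to audit.
                var master = Keys(args[0]);
                foreach (var file in args.Skip(1))
                {
                    foreach (var missing in master.Except(Keys(file)))
                        Console.WriteLine("{0} is missing '{1}'", Path.GetFileName(file), missing);
                }
            }
        }

    Run as part of the build, this at least catches the "omitted in just one .resx file" case mentioned above.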

    Read the article

  • PHP: Iterate through folders and display HTML contents

    - by Mestika
    Hi, I'm currently trying to develop a method to get an overview of all the different web templates I've created and (legally) downloaded over the years. I thought about displaying them the way WordPress previews its templates: a small preview window displaying the concrete file with styles and everything. How to divide them into rows and columns, create an AJAX modal window that opens on preview, handle pagination and so on, I believe I can manage; it is the concept itself I'm asking about: iterating over several folders, finding all index.htm / index.html pages, and displaying them. I've not worked much with directories in PHP, and the only references and code snippets I've found so far just list all the files in a certain directory. I would be really grateful if someone knew about a script, a function, or a snippet, or could just give me a nudge in the right direction to create such a (probably simple) preview function. Sincerely, Mestika

    Read the article

  • Rails 3 / RVM - Acts_as_list compiled locally - Why Can't Ruby See This Gem?

    - by rabbit on rails
    I cannot figure out why Rails/Ruby cannot see this gem, despite each telling me that the gem is visible. I compiled this gem locally from a GitHub branch, since the main version seems to be broken in Rails 3. Or perhaps I am missing something else entirely.

        Ovid:lightserve dlipa$ gem list

        *** LOCAL GEMS ***
        ..
        acts_as_list (0.2.1)
        ..

    And:

        Ovid:lightserve dlipa$ cat Gemfile
        ...
        gem "acts_as_list", "0.2.1"
        ...

    And:

        Ovid:lightserve dlipa$ bundle install
        ...
        Using acts_as_list (0.2.1)
        Your bundle is updated! Use `bundle show [gemname]` to see where a bundled gem is installed

    But:

        Ovid:lightserve dlipa$ r c
        RubyGems Environment:
          - RUBYGEMS VERSION: 1.6.1
          - RUBY VERSION: 1.9.2 (2011-02-18 patchlevel 180) [x86_64-darwin10.6.0]
          - INSTALLATION DIRECTORY: /Users/dlipa/.rvm/gems/ruby-1.9.2-p180
          - RUBY EXECUTABLE: /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/bin/ruby
          - EXECUTABLE DIRECTORY: /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/bin
          - RUBYGEMS PLATFORMS:
            - ruby
            - x86_64-darwin-10
          - GEM PATHS:
            - /Users/dlipa/.rvm/gems/ruby-1.9.2-p180
            - /Users/dlipa/.rvm/gems/ruby-1.9.2-p180@global
          - GEM CONFIGURATION:
            - :update_sources => true
            - :verbose => true
            - :benchmark => false
            - :backtrace => false
            - :bulk_threshold => 1000
            - :sources => ["http://rubygems.org/", "http://gems.github.com"]
          - REMOTE SOURCES:
            - http://rubygems.org/
            - http://gems.github.com
        Loading development environment (Rails 3.0.5)
        ruby-1.9.2-p180 :001 > require 'acts_as_list'
        LoadError: no such file to load -- acts_as_list
            from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:239:in `require'
            from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:239:in `block in require'
            from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:225:in `block in load_dependency'
            from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:596:in `new_constants_in'
            from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:225:in `load_dependency'
            from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:239:in `require'
            from (irb):1
            from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.0.5/lib/rails/commands/console.rb:44:in `start'
            from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.0.5/lib/rails/commands/console.rb:8:in `start'
            from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.0.5/lib/rails/commands.rb:23:in `<top (required)>'
            from script/rails:6:in `require'
            from script/rails:6:in `<main>'
        ruby-1.9.2-p180 :002 >

    And:

        Ovid:lightserve dlipa$ irb
        ruby-1.9.2-p180 :001 > require 'acts_as_list'
        LoadError: no such file to load -- acts_as_list
            from /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
            from /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
            from (irb):1
            from /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/bin/irb:16:in `<main>'
        ruby-1.9.2-p180 :002 >

    Can anyone explain why this might be happening? I'd really appreciate it!

    ** UPDATE -- Response to Andrew Marshall's suggestion **

    I changed the Gemfile to read the gem directly from git, but it did not resolve the problem. Does this mean that there is a problem with this gem? The error message is not very helpful ;-) Removed:

        Ovid:lightserve dlipa$ bundle show acts_as_list
        Could not find gem 'acts_as_list' in the current bundle.

    Then added back via:

        gem "acts_as_list", :git => "git://github.com/vpereira/acts_as_list.git"

        Ovid:lightserve dlipa$ bundle install
        Updating git://github.com/vpereira/acts_as_list.git
        ...

    Same problem, even though bundle show matches the commit on that page:

        Ovid:lightserve dlipa$ bundle show acts_as_list
        /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/bundler/gems/acts_as_list-4cb76a8b198c

        Ovid:lightserve dlipa$ irb
        ruby-1.9.2-p180 :001 > require 'acts_as_list'
        LoadError: no such file to load -- acts_as_list
            from /Users/dlipa/.rvm/rubies/ruby-1.9.2-..

    I just looked in the gem, and it appears there is no file called 'acts_as_list' in the gem. So it appears to be idiosyncratic, albeit poorly reported by Rails/Ruby. The API appears to have changed to:

        ruby-1.9.2-p180 :003 > require 'active_record/acts/list'
         => nil
        ruby-1.9.2-p180 :004 > ActiveRecord::Acts::List
         => ActiveRecord::Acts::List

    Read the article

  • Jetty 7 + MySQL Config [java.lang.ClassNotFoundException: org.mortbay.jetty.webapp.WebAppContext]

    - by Scott Chang
    I've been trying to get a c3p0 db connection pool configured for Jetty, but I keep getting a ClassNotFoundException:

        2010-03-14 19:32:12.028:WARN::Failed startup of context WebAppContext@fccada@fccada/phpMyAdmin,file:/usr/local/jetty/webapps/phpMyAdmin/,file:/usr/local/jetty/webapps/phpMyAdmin/
        java.lang.ClassNotFoundException: org.mortbay.jetty.webapp.WebAppContext
            at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
            at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:313)
            at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:266)
            at org.eclipse.jetty.util.Loader.loadClass(Loader.java:90)
            at org.eclipse.jetty.xml.XmlConfiguration.nodeClass(XmlConfiguration.java:224)
            at org.eclipse.jetty.xml.XmlConfiguration.configure(XmlConfiguration.java:187)
            at org.eclipse.jetty.webapp.JettyWebXmlConfiguration.configure(JettyWebXmlConfiguration.java:77)
            at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:975)
            at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:586)
            at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:349)
            at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55)
            at org.eclipse.jetty.server.handler.HandlerCollection.doStart(HandlerCollection.java:165)
            at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:162)
            at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55)
            at org.eclipse.jetty.server.handler.HandlerCollection.doStart(HandlerCollection.java:165)
            at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55)
            at org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:92)
            at org.eclipse.jetty.server.Server.doStart(Server.java:228)
            at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55)
            at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:990)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:955)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.eclipse.jetty.start.Main.invokeMain(Main.java:394)
            at org.eclipse.jetty.start.Main.start(Main.java:546)
            at org.eclipse.jetty.start.Main.parseCommandLine(Main.java:208)
            at org.eclipse.jetty.start.Main.main(Main.java:75)

    I'm new to Jetty, and I want to ultimately get phpMyAdmin and WordPress to run on it through Quercus and a JDBC connection. Here are my web.xml and jetty-web.xml files in my WEB-INF directory.
    jetty-web.xml:

        <?xml version="1.0"?>
        <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://jetty.mortbay.org/configure.dtd">
        <Configure class="org.mortbay.jetty.webapp.WebAppContext">
            <New id="mysql" class="org.mortbay.jetty.plus.naming.Resource">
                <Arg>jdbc/mysql</Arg>
                <Arg>
                    <New class="com.mchange.v2.c3p0.ComboPooledDataSource">
                        <Set name="Url">jdbc:mysql://localhost:3306/mysql</Set>
                        <Set name="User">user</Set>
                        <Set name="Password">pw</Set>
                    </New>
                </Arg>
            </New>
        </Configure>

    web.xml:

        <?xml version="1.0"?>
        <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.2//EN" "http://java.sun.com/j2ee/dtds/web-app_2_2.dtd">
        <web-app>
            <description>Caucho Technology's PHP Implementation</description>
            <resource-ref>
                <description>My DataSource Reference</description>
                <res-ref-name>jdbc/mysql</res-ref-name>
                <res-type>javax.sql.DataSource</res-type>
                <res-auth>Container</res-auth>
            </resource-ref>
            <servlet>
                <servlet-name>Quercus Servlet</servlet-name>
                <servlet-class>com.caucho.quercus.servlet.QuercusServlet</servlet-class>
                <!-- Specifies the encoding Quercus should use to read in PHP scripts. -->
                <init-param>
                    <param-name>script-encoding</param-name>
                    <param-value>UTF-8</param-value>
                </init-param>
                <!-- Tells Quercus to use the following JDBC database and to ignore the arguments of mysql_connect(). -->
                <init-param>
                    <param-name>database</param-name>
                    <param-value>jdbc/mysql</param-value>
                </init-param>
                <init-param>
                    <param-name>ini-file</param-name>
                    <param-value>WEB-INF/php.ini</param-value>
                </init-param>
            </servlet>
            <servlet-mapping>
                <servlet-name>Quercus Servlet</servlet-name>
                <url-pattern>*.php</url-pattern>
            </servlet-mapping>
            <welcome-file-list>
                <welcome-file>index.php</welcome-file>
            </welcome-file-list>
        </web-app>

    I'm guessing that I'm missing a few jars or something. Currently I have placed the following jars in my WEB-INF/lib directory: c3p0-0.9.1.2.jar, commons-dbcp-1.4.jar, commons-pool-1.5.4.jar, mysql-connector-java-5.1.12-bin.jar. I have also tried putting these jars in JETTY-HOME/lib/ext, but to no avail... Someone please tell me what is wrong with my configuration. I'm sick of digging through Jetty's crappy documentation.

    Read the article

  • Adding Client Validation To DataAnnotations DataType Attribute

    - by srkirkland
    The System.ComponentModel.DataAnnotations namespace contains a validation attribute called DataTypeAttribute, which takes an enum specifying what data type the given property conforms to. Here are a few quick examples:

        public class DataTypeEntity
        {
            [DataType(DataType.Date)]
            public DateTime DateTime { get; set; }

            [DataType(DataType.EmailAddress)]
            public string EmailAddress { get; set; }
        }

    This attribute comes in handy when using ASP.NET MVC, because the type you specify will determine what "template" MVC uses. Thus, for the DateTime property, if you create a partial in Views/[loc]/EditorTemplates/Date.ascx (or .cshtml for Razor), that view will be used to render the property when using any of the Html.EditorFor() methods.

    One thing that the DataType() validation attribute does not do is any actual validation. To see this, let's take a look at the EmailAddress property above. It turns out that regardless of the value you provide, the entity will be considered valid:

        //valid
        new DataTypeEntity { EmailAddress = "Foo" };

    Hmmm. Since DataType() doesn't validate, that leaves us with two options: (1) create our own attributes for each data type to validate, like [Date], or (2) add validation into the DataType attribute directly. In this post, I will show you how to hook up client-side validation to the existing DataType() attribute for a desired type. From there, adding server-side validation would be a breeze, and even writing a custom validation attribute would be simple (more on that in future posts).

    Validation All The Way Down

    Our goal will be to leave our DataTypeEntity class (from above) untouched, requiring no reference to System.Web.Mvc. Then we will make an ASP.NET MVC project that allows us to create a new DataTypeEntity and hook up automatic client-side date validation using the suggested "out-of-the-box" jquery.validate bits that are included with ASP.NET MVC 3. For simplicity I'm going to focus only on the DateTime field, but the concept is generally the same for any other DataType.

    Building a DataTypeAttribute Adapter

    To start, we will need to build a new validation adapter that we can register using ASP.NET MVC's DataAnnotationsModelValidatorProvider.RegisterAdapter() method.
    This method takes two Type parameters: the first is the attribute we are looking to validate with, and the second is an adapter that should subclass System.Web.Mvc.ModelValidator. Since we are extending DataAnnotations, we can use the subclass of ModelValidator called DataAnnotationsModelValidator<>. This takes a generic argument of type DataAnnotations.ValidationAttribute, which, lucky for us, means the DataTypeAttribute will fit in nicely. So starting from there and implementing the required constructor, we get:

        public class DataTypeAttributeAdapter : DataAnnotationsModelValidator<DataTypeAttribute>
        {
            public DataTypeAttributeAdapter(ModelMetadata metadata, ControllerContext context, DataTypeAttribute attribute)
                : base(metadata, context, attribute) { }
        }

    Now you have a full-fledged validation adapter, although it doesn't do anything yet. There are two methods you can override to add functionality: IEnumerable<ModelValidationResult> Validate(object container) and IEnumerable<ModelClientValidationRule> GetClientValidationRules(). Adding logic to the server-side Validate() method is pretty straightforward, and for this post I'm going to focus on GetClientValidationRules().

    Adding a Client Validation Rule

    Adding client validation is now incredibly easy, because jquery.validate is very powerful and already comes with a ton of validators (including date and regular expressions for our email example). Teamed with the new unobtrusive validation javascript support, we can make short work of our ModelClientValidationDateRule:

        public class ModelClientValidationDateRule : ModelClientValidationRule
        {
            public ModelClientValidationDateRule(string errorMessage)
            {
                ErrorMessage = errorMessage;
                ValidationType = "date";
            }
        }

    If your validation has additional parameters, you can use the ValidationParameters IDictionary<string, object> to include them. There is a little bit of conventions magic going on here, but the distilled version is that we are defining a "date" validation type, which will be included as html5 data-* attributes (specifically data-val-date). Then jquery.validate.unobtrusive takes this attribute and basically passes it along to jquery.validate, which knows how to handle date validation.
    Finishing our DataTypeAttribute Adapter

    Now that we have a model client validation rule, we can return it in the GetClientValidationRules() method of our DataTypeAttributeAdapter created above. Basically, I want to say: if DataType.Date was provided, then return the date rule with a given error message (using ValidationAttribute.FormatErrorMessage()). The entire adapter is below:

        public class DataTypeAttributeAdapter : DataAnnotationsModelValidator<DataTypeAttribute>
        {
            public DataTypeAttributeAdapter(ModelMetadata metadata, ControllerContext context, DataTypeAttribute attribute)
                : base(metadata, context, attribute)
            {
            }

            public override System.Collections.Generic.IEnumerable<ModelClientValidationRule> GetClientValidationRules()
            {
                if (Attribute.DataType == DataType.Date)
                {
                    return new[] { new ModelClientValidationDateRule(Attribute.FormatErrorMessage(Metadata.GetDisplayName())) };
                }

                return base.GetClientValidationRules();
            }
        }

    Putting it all together

    Now that we have an adapter for the DataTypeAttribute, we just need to tell ASP.NET MVC to use it. The easiest way to do this is to use the built-in DataAnnotationsModelValidatorProvider by calling RegisterAdapter() in your global.asax startup method:

        DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(DataTypeAttribute), typeof(DataTypeAttributeAdapter));

    Show and Tell

    Let's see this in action using a clean ASP.NET MVC 3 project. First, make sure to reference the jquery, jquery.validate and jquery.validate.unobtrusive scripts that you will need for client validation. Next, let's make a model class (note we are using the same built-in DataType() attribute that comes with System.ComponentModel.DataAnnotations):
        public class DataTypeEntity
        {
            [DataType(DataType.Date, ErrorMessage = "Please enter a valid date (ex: 2/14/2011)")]
            public DateTime DateTime { get; set; }
        }

    Then we make a create page with a strongly-typed DataTypeEntity model; the form section is shown below (notice we are just using EditorForModel):

        @using (Html.BeginForm())
        {
            @Html.ValidationSummary(true)
            <fieldset>
                <legend>Fields</legend>

                @Html.EditorForModel()

                <p>
                    <input type="submit" value="Create" />
                </p>
            </fieldset>
        }

    The final step is to register the adapter in our global.asax file:

        DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(DataTypeAttribute), typeof(DataTypeAttributeAdapter));

    Now we are ready to run the page. Looking at the datetime field's html, we see that our adapter added some data-* validation attributes:

        <input type="text" value="1/1/0001" name="DateTime" id="DateTime" data-val-required="The DateTime field is required." data-val-date="Please enter a valid date (ex: 2/14/2011)" data-val="true" class="text-box single-line valid">

    Here data-val-required was added automatically because DateTime is non-nullable, and data-val-date was added by our validation adapter. Now if we try to enter an invalid date, our custom error message is displayed via client-side validation as soon as we tab out of the box. If we didn't include a custom validation message, the default DataTypeAttribute message "The field {0} is invalid" would have been shown (of course we can change the default as well). Note we did not specify server-side validation, but in this case we don't have to, because an invalid date will cause a server-side error during model binding.
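    If you did want explicit server-side validation as well, the adapter's other override is the place for it. The following is a sketch of my own, not part of the original adapter; I've used the email case (since an invalid date is already caught by model binding), and the regex is illustrative rather than RFC-complete:

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;
        using System.Text.RegularExpressions;
        using System.Web.Mvc;

        public class DataTypeAttributeAdapter : DataAnnotationsModelValidator<DataTypeAttribute>
        {
            public DataTypeAttributeAdapter(ModelMetadata metadata, ControllerContext context, DataTypeAttribute attribute)
                : base(metadata, context, attribute)
            {
            }

            public override IEnumerable<ModelValidationResult> Validate(object container)
            {
                var value = Metadata.Model as string;
                if (Attribute.DataType == DataType.EmailAddress
                    && !string.IsNullOrEmpty(value)
                    && !Regex.IsMatch(value, @"^[^@\s]+@[^@\s]+\.[^@\s]+$"))
                {
                    // Reuse the attribute's error message formatting,
                    // exactly like the client-side rule does.
                    yield return new ModelValidationResult
                    {
                        Message = Attribute.FormatErrorMessage(Metadata.GetDisplayName())
                    };
                }
            }
        }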
    Conclusion

    I really like how easy it is to register new data annotations model validators, whether they are your own or, as in this post, supplements to existing validation attributes. I'm still debating whether adding the validation directly in the DataType attribute is the correct place to put it versus creating a dedicated "Date" validation attribute, but it's nice to know either option is available and, as we've seen, simple to implement.

    I'm also working through the nascent stages of an open source project that will create validation attribute extensions to the existing data annotations providers using similar techniques as seen above (examples: Email, Url, EqualTo, Min, Max, CreditCard, etc). Keep an eye on this blog and subscribe to my twitter feed (@srkirkland) if you are interested in announcements.

    Read the article

  • Setting up Metro 2.0 with Jetty 7

    - by trojanfoe
    This question relates to a previous question of mine. I am attempting to set up a low-overhead web container using Jetty 7 to which I can deploy Web Services built with Metro 2.0. I have installed the following Metro 2.0 libs into jetty/lib:

        webservices-extra-api.jar
        webservices-extra.jar
        webservices-rt.jar
        webservices-tools.jar

    And the following into a new jetty/lib/endorsed directory:

        jsr173_api.jar
        webservices-api.jar

    I start Jetty with the following script (Windows) to ensure that jetty/lib/endorsed is part of the 'endorsed library path' and that Jetty adds the webservices jars to its classpath:

        @echo off
        set JETTY_HOME=C:\dev\jetty-7.1.0
        set JAVA_OPTS=-Xmx1024m -XX:MaxPermSize=512m -Djetty.home=%JETTY_HOME% -Djava.endorsed.dirs=%JETTY_HOME%\lib\endorsed -Djetty.class.path=%JETTY_HOME%\lib\webservices-rt.jar;%JETTY_HOME%\lib\endorsed\webservices-api.jar -DSTOP.PORT=8079 -DSTOP.KEY=jettykey
        pushd %JETTY_HOME%
        java %JAVA_OPTS% -jar start.jar
        popd

    However, when I deploy a Web Services war file (for example, Metro sample 'pricequote'), I get the following exception:

        java.lang.ClassNotFoundException: com.sun.xml.ws.transport.http.servlet.WSServletContextListener

    Can anyone help me with this please? I suspect it's related to the order of classes in Jetty's classpath.

    Read the article

  • "Call to undefined method" in my Facebook application

    - by Robert
    I have written my first PHP script to learn Facebook API stuff. It goes like this:

        <?php
        require_once('facebook/client/facebook.php');

        $facebook = new Facebook(
            '0fff13540b7ff2ae94be38463cb4fa67',
            '8a029798dd463be6c94cb8d9ca851696');

        $fb_user = $facebook->require_login();
        ?>
        Hello <fb:name uid='<?php echo $fb_user; ?>' useyou='false' possessive='true' />!
        Welcome to my first application!

    I put "facebook.php" in the same directory as my PHP script. However, after I deploy the script on a web server, link it with Facebook, and run it, I get an error saying:

        "Fatal error: Call to undefined method Facebook::require_login() in /home/a2660104/public_html/facebook-php-sdk-94fcb13/src/default.php on line 16"

    Could anyone help me a bit on this? I am a beginner to Facebook app programming.

    Read the article

  • DTD is prohibited in this XML document -- how to change permissions?

    - by frankadelic
    I am using a 3rd-party .NET component which requires an XML configuration file. I'm using this in an ASP.NET application. I get an error when I configure the XML with the following DTD:

        <!DOCTYPE prod-config SYSTEM "prod-config.dtd">

    The error is as follows:

        For security reasons DTD is prohibited in this XML document. To enable DTD processing set the ProhibitDtd property on XmlReaderSettings to false and pass the settings into XmlReader.Create method.

    prod-config.dtd is sitting in the same directory as the XML config file. I don't have access to the component code to modify XmlReaderSettings, ProhibitDtd etc. Is there another way I can modify or tag the XML file to permit the DTD to be accessed? (FYI, the component is the Oracle Coherence .NET client)
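    For reference, the reader-level fix the error message describes looks like the sketch below; it only applies when you control where the XmlReader is created, which the asker here cannot, and the file name is hypothetical:

        using System.Xml;

        class DtdReaderExample
        {
            static void Main()
            {
                var settings = new XmlReaderSettings();
                // Allow the <!DOCTYPE ...> declaration to be processed.
                // (On .NET 4 the preferred equivalent is
                // settings.DtdProcessing = DtdProcessing.Parse;)
                settings.ProhibitDtd = false;

                using (XmlReader reader = XmlReader.Create("prod-config.xml", settings))
                {
                    while (reader.Read()) { /* consume the document */ }
                }
            }
        }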

    Read the article

  • ASP.Net error: "The type 'foo' exists in both 'temp1.dll' and 'temp2.dll'" (pt 2)

    - by Greg
    I'm experiencing the same problem as in this question, but none of the answers fixed my problem. (Edit: setting the web.config batch attribute works, but that's a cover-up, not a solution.) The problem I'm having is with a User Control that I moved from the root directory to a subdirectory within the same Web Application project. It used to work fine before I moved it; once I moved it, it started giving me the error message. It's saying that the class name exists in two dll files in Temporary ASP.NET Files. Sure enough, when I open Reflector, it's in two dlls. If I rename the class and ascx file, everything works fine. No usages of the original name exist within any of the files in my entire application. After renaming, I opened all of the dll files in Temporary ASP.NET Files with Reflector, and no references to the original class name exist. So where's this phantom reference coming from, and how can I fix this?

    Read the article

  • g++ not linking header files properly

    - by cambr
    I am using the cygwin libraries to run C and C++ programs on Windows. gcc runs fine, but with g++ I get a long list of errors. I think these errors are because of linking problems with the C libraries. Can you suggest what I need to do to fix this? Beginning error lines:

        In file included from testgpp.cpp:1:
        /cygdrive/c/cygwin/bin/../lib/gcc/i686-pc-cygwin/3.4.4/include/c++/cstdio:52:19: stdio.h: No such file or directory
        In file included from testgpp.cpp:1:
        /cygdrive/c/cygwin/bin/../lib/gcc/i686-pc-cygwin/3.4.4/include/c++/cstdio:99: error: `::FILE' has not been declared
        /cygdrive/c/cygwin/bin/../lib/gcc/i686-pc-cygwin/3.4.4/include/c++/cstdio:100: error: `::fpos_t' has not been declared
        /cygdrive/c/cygwin/bin/../lib/gcc/i686-pc-cygwin/3.4.4/include/c++/cstdio:102: error: `::clearerr' has not been declared
        /cygdrive/c/cygwin/bin/../lib/gcc/i686-pc-cygwin/3.4.4/include/c++/cstdio:103: error: `::fclose' has not been declared
        /cygdrive/c/cygwin/bin/../lib/gcc/i686-pc-cygwin/3.4.4/include/c++/cstdio:104: error: `::feof' has not been declared

    The whole error dump: PasteBin

    Read the article

  • Django cannot find my templatetags, even though it's in INSTALLED_APPS and has a __init__.py

    - by Vivian Short
    I just installed django-compress (http://code.google.com/p/django-compress) into /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/compress. I added 'compress' to INSTALLED_APPS. In my template file, I wrote {% load compressed %}. I got the error:

        'compressed' is not a valid tag library: Could not load template library from django.templatetags.compressed, No module named compressed

    I verified that there is an __init__.py in compress, as well as in compress/templatetags/. I tried putting the compress directory into PYTHONPATH. I ran python and wrote "import compress" and that worked. Your suggestions would be very appreciated! What else can I try?

    Read the article

  • Configure Forms based authentication in SharePoint 2010

    - by sreejukg
    Configuring forms authentication is a straightforward task in SharePoint. Most public-facing websites built on SharePoint require forms-based authentication. Recently, a WCM implementation whose project team I was part of required a registration system: any internet user can register on the site, and the site offers some membership-specific functionality once the user has logged in. Since registration is open to all, I didn't want to store all those users in Active Directory, so I decided to use forms-based authentication for them. This is a typical scenario for forms authentication in a SharePoint implementation.

    To implement forms authentication you require the following:

    A data store where the users are stored. Technically this can be Active Directory, a SQL Server database, LDAP, etc.

    A login page. Forms authentication will redirect the user to the login page if the request is not authenticated. On the login page, there will be controls that validate the user's input against the configured data store.

    In this article, I am going to use a SQL Server database with the ASP.NET membership APIs to configure forms-based authentication in SharePoint 2010. This article assumes that you have a SQL membership database available. I already configured the membership and roles database using the aspnet_regsql command. If you want to know how to configure a membership database using aspnet_regsql, read the blog post below.

    http://weblogs.asp.net/sreejukg/archive/2011/06/16/usage-of-aspnet-regsql-exe-in-asp-net-4.aspx

    The snapshot of the database after implementing membership and the role manager is as follows. I have used the database name "aspnetdb_claim". Make sure you have created the database and that it contains the tables and stored procedures for membership.

    Create a web application with claims-based authentication

    This article assumes you have already created a web application using claims-based authentication; if you want to enable forms-based authentication in SharePoint 2010, you must enable claims-based authentication. Read this post on creating a web application using claims-based authentication:

    http://weblogs.asp.net/sreejukg/archive/2011/06/15/create-a-web-application-in-sharepoint-2010-using-claims-based-authentication.aspx

    Make sure you have selected "Enable Forms Based Authentication" and then entered the membership provider and role manager names. To confirm the configuration, navigate to the Central Administration website, go to the Web Applications page, select the web application and click the Authentication Providers icon; you will see the authentication providers for the current web application. Go to the "Claims authentication types" section and make sure forms-based authentication is enabled. As shown in the snapshot, I have named the membership provider SPFormAuthMembership and the role manager SPFormAuthRoleManager. You can choose your own names as needed.

    Modify the configuration files (web.config) to enable forms authentication

    There are three applications that need to be configured to support forms authentication:

    1. Central Administration. If you want to assign permissions to the web application using credentials from forms authentication, you need to update the Central Administration configuration. If you do not want to access forms authentication credentials from Central Administration, just skip this step.
    2. The STS service application. The Security Token Service (STS) is the service application that issues security tokens when users log in. You need to modify the STS application's configuration to make sure users are able to log in. To find the STS application, follow these steps:

    Go to IIS Manager.
    Expand the Sites node; you will see SharePoint Web Services.
    Expand SharePoint Web Services; you will see SecurityTokenServiceApplication.
    Right-click SecurityTokenServiceApplication and click Explore; it will open the corresponding file system location. By default, the path for the STS is C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\WebServices\SecurityToken

    You need to modify the configuration file available in that location.

    3. The web application that needs forms authentication enabled. You need to modify your web application's configuration to make sure it identifies users from the forms authentication store.

    Based on the above, I am going to modify the web configuration. At the end of each step, I have mentioned the expected output. I recommend you go step by step and, after each step, verify that the configuration changes are working as expected. If you do everything all together and only test your application at the end, you may have difficulty troubleshooting configuration errors.

    Modifications for the Central Administration web.config

    Open the web.config for Central Administration in a text editor (I always prefer Visual Studio for editing web.config). In most cases, the path of the web.config for the Central Administration website is as follows:

        C:\inetpub\wwwroot\wss\VirtualDirectories\<port number>

    Make sure you keep a backup copy of the web.config before editing it. Let me summarize what we are going to do with the Central Administration web.config: first I am going to add a connection string that points to the forms authentication database created in the previous steps; then I need to add a membership provider and a role manager with the corresponding connection string; then I need to update the PeoplePickerWildcards section to make sure the users appear in search results.

    By default there is no connection string in the web.config of Central Administration. Add a connection string just after the configSections element. Below is the connection string I have used throughout the article:

        <add name="FormAuthConnString" connectionString="Initial Catalog=yourdatabasename;data source=databaseservername;Integrated Security=SSPI;" />

    Once you have added the connection string, the web.config will look similar to the snapshot. Now add the membership provider. In the web.config for Central Administration there is a <membership> tag; search for it. You will find membership and roleManager under the <system.web> element. Under the membership providers section, add the code below:

        <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" />

    After adding the membership element, see the snapshot of the web.config. Now you need to add the role manager element. Inside the providers element under roleManager, add the code below:
<add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> After adding, your role manager will look similar to the following. As a last step, you need to update the people picker wildcard element in web.config, so that the users from your membership provider are available for browsing in Central Administration. Search for PeoplePickerWildcards in the web.config, add the following inside the <PeoplePickerWildcards> tag. <add key="SPFormAuthMembership" value="%" /> After adding this element, your web.config will look like After completing these steps, you can browse the users available in the SQL server database from central administration website. Go to the site collection administrator’s page from central administration. Select the site collection you have created for form authentication. Click on the people picker icon, choose Forms Auth and click on the search icon, you will see the users listed from the SQL server database. Once you complete these steps, make sure the users are available for browsing from central administration website. If you are unable to find the users, there must be some errors in the configuration, check windows event logs to find related errors and fix them. Change the web.config for STS application Open the web.config for STS application in text editor. By default, STS web.config does not have system.Web or connectionstrings section. Just after the System.Webserver element, add the following code. <connectionStrings> <add name="FormAuthConnString" connectionString="Initial Catalog=aspnetdb_claim;data source=sp2010_db;Integrated Security=SSPI;" /> </connectionStrings> <system.web> <roleManager enabled="true" cacheRolesInCookie="false" cookieName=".ASPXROLES" cookieTimeout="30" cookiePath="/" cookieRequireSSL="false" cookieSlidingExpiration="true" cookieProtection="All" createPersistentCookie="false" maxCachedResults="25"> <providers> <add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> </providers> </roleManager> <membership userIsOnlineTimeWindow="15" hashAlgorithmType=""> <providers> <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> </providers> </membership> </system.web> See the snapshot of the web.config after adding the required elements. After adding this, you should be able to login using the credentials from SQL server. Try assigning a user as primary/secondary administrator for your site collection from Central Administration and login to your site using form authentication. If you made everything correct, you should be able to login. This means you have successfully completed configuration of STS Configuration of Web Application for Form Authentication As a last step, you need to modify the web.config of the form authentication web application. Once you have done this, you should be able to grant permissions to users stored in the membership database. Open the Web.config of the web application you created for form authentication. 
    You can find the web.config for the application under the path C:\inetpub\wwwroot\wss\VirtualDirectories\<port number>. Basically you need to add the connection string, the membership provider and the role manager, and update the people picker wildcard configuration.

    Add the connection string (the same one you added to the web.config in Central Administration); see the screenshot after the connection string has been added.

    Search for <membership> in the web.config; you will find it inside the system.web element, with other providers already present. Add your forms authentication membership provider (similar to the one added to the Central Administration web.config) to the providers element under membership.

    Search for the <roleManager> element in the web.config and add the new provider name under its providers section.

    Now you need to configure PeoplePickerWildcards in the web.config. As I specified earlier, this is to make sure you can locate users by entering part of their username. Add the following line under the <PeoplePickerWildcards> element (the same entry used for Central Administration):

        <add key="SPFormAuthMembership" value="%" />

    Now you have completed all the setup for forms authentication. Navigate to the web application. From Site Actions -> Site Settings, go to People and Groups. Click New -> Add Users; this pops up the people picker dialog. Click the browse icon, select Forms Auth, enter a username in the search textbox, and click the search icon.

    If it displays the user, you are done with the configuration. As you add users to the forms authentication database, those users will be able to access the SharePoint portal as normal.
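    One last note from my side: to actually test the login you need at least one user in the membership database. Below is a minimal sketch for seeding one from a small console app; it assumes the console app's own config file registers the same SPFormAuthMembership provider and FormAuthConnString connection string used above, and the username and password are placeholders:

        using System;
        using System.Web.Security;

        class SeedUser
        {
            static void Main()
            {
                MembershipCreateStatus status;
                Membership.CreateUser(
                    "testuser",             // placeholder username
                    "P@ssw0rd!123",         // must satisfy the provider's password policy
                    "testuser@example.com",
                    null,                   // pass a question/answer instead of null
                    null,                   // if the provider requires them
                    true,                   // approved
                    out status);

                Console.WriteLine(status);  // prints "Success" when the user is created
            }
        }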

    Read the article

  • MSBuild cannot find SGen when compiling a solution

    - by Jaxidian
    I've looked at several other SGen-related questions on here, and either their answers don't apply or they don't fix this for me. I have installed several SDKs to fix this issue with no luck. Reference types should not be changed, since this is the only place this is a problem. One suggestion is to put SGen.exe into the C:\Windows\Microsoft.NET\Framework\v3.5 folder, but that's not been done on the box where this is not a problem. In this scenario, SGen.exe actually exists and is right where it's supposed to be, but MSBuild is still having issues finding it for some reason!

    Background: We have a NAnt script that automates our builds. In this scenario, NAnt is calling MSBuild, and MSBuild is generating the error claiming to be unable to find SGen. The project is .NET 3.5-based. I have my primary dev environment (64-bit Vista Ultimate) where the script works perfectly, and I am attempting to duplicate it in a VM (64-bit Win 7 Ultimate). I THINK I have everything to the point where I should be good to go, but this fails on the Win7 box (works perfectly on the Vista box). I have done some comparisons between the two boxes and they both look identical in this regard, but it still fails. For example, the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework key's sdkInstallRootv2.0 value is set to C:\Program Files\Microsoft.NET\SDK\v2.0 64bit\ on both machines. On both machines, SGen.exe is in that path's bin subdirectory.

    NAnt script:

        <target name="report-installer" depends="fail-if-environment-not-set">
          <exec program="MSBuild.exe" basedir="${framework35.directory}">
            <arg value="${tools.directory.current}\ReportInstaller\ReportInstaller.sln" />
            <arg value="/p:Configuration=${buildconfiguration.current}" />
          </exec>
        </target>

    The error message I get is this:

        report-installer:
        [exec] Microsoft (R) Build Engine Version 3.5.30729.4926
        [exec] [Microsoft .NET Framework, Version 2.0.50727.4927]
        [exec] Copyright (C) Microsoft Corporation 2007. All rights reserved.
        [exec]
        [exec] Build started 4/8/2010 11:28:23 AM.
        [exec] Project "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.sln" on node 0 (default targets).
        [exec] Building solution configuration "Release|Any CPU".
        [exec] Project "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.sln" (1) is building "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.csproj" (2) on node 0 (default targets).
        [exec] Could not locate the .NET Framework SDK. The task is looking for the path to the .NET Framework SDK at the location specified in the SDKInstallRootv2.0 value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework. You may be able to solve the problem by doing one of the following: 1.) Install the .NET Framework SDK. 2.) Manually set the above registry key to the correct location.
        [exec] CoreCompile:
        [exec] Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
        [exec] C:\Windows\Microsoft.NET\Framework\v2.0.50727\Microsoft.Common.targets(1902,9): error MSB3091: Task failed because "sgen.exe" was not found, or the .NET Framework SDK v2.0 is not installed. The task is looking for "sgen.exe" in the "bin" subdirectory beneath the location specified in the SDKInstallRootv2.0 value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework. You may be able to solve the problem by doing one of the following: 1.) Install the .NET Framework SDK v2.0. 2.) Manually set the above registry key to the correct location. 3.)
        Pass the correct location into the "ToolPath" parameter of the task.
        [exec] Done Building Project "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.csproj" (default targets) -- FAILED.
        [exec] Done Building Project "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.sln" (default targets) -- FAILED.
        [exec]
        [exec] Build FAILED.
        [exec]
        [exec] "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.sln" (default target) (1) ->
        [exec] "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.csproj" (default target) (2) ->
        [exec] (GenerateSerializationAssemblies target) ->
        [exec] C:\Windows\Microsoft.NET\Framework\v2.0.50727\Microsoft.Common.targets(1902,9): error MSB3091: Task failed because "sgen.exe" was not found, or the .NET Framework SDK v2.0 is not installed. The task is looking for "sgen.exe" in the "bin" subdirectory beneath the location specified in the SDKInstallRootv2.0 value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework. You may be able to solve the problem by doing one of the following: 1.) Install the .NET Framework SDK v2.0. 2.) Manually set the above registry key to the correct location. 3.) Pass the correct location into the "ToolPath" parameter of the task.
        [exec]
        [exec] 0 Warning(s)
        [exec] 1 Error(s)
        [exec]
        [exec] Time Elapsed 00:00:00.24
        [call] C:\Projects\Production\Source\reports.build(15,4):
        [call] External Program Failed: C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe (return code was 1)

    What am I doing wrong here that is causing MSBuild to STILL be unable to find SGen?

    Read the article

< Previous Page | 512 513 514 515 516 517 518 519 520 521 522 523  | Next Page >