Search Results

Search found 27931 results on 1118 pages for 'custom configuration'.

Page 239/1118 | < Previous Page | 235 236 237 238 239 240 241 242 243 244 245 246  | Next Page >

  • Unable to validate data. at System.Web.Configuration.MachineKeySection.GetDecodedData

    - by Ben Williams
    I have several websites which get approximately 3000 pageviews in total per day, and I get this viewstate error roughly 5-10 times per day, caught in global.asax:

        System.Web.HttpException: Unable to validate data.
        at System.Web.Configuration.MachineKeySection.GetDecodedData(Byte[] buf, Byte[] modifier, Int32 start, Int32 length, Int32& dataLength)
        at System.Web.UI.ObjectStateFormatter.Deserialize(String inputString)

    I have tried:
    - hard-coding the machine key in web.config for all websites
    - hard-coding the machine key in machine.config
    - adding items to the pages section of the web.config for all websites.

    The machine key looks like:

        <machineKey validationKey="key goes here" decryptionKey="key goes here" validation="SHA1" decryption="AES" />

    The pages section looks like:

        <pages renderAllHiddenFieldsAtTopOfForm="true" validateRequest="false" enableEventValidation="false" viewStateEncryptionMode="Never">

    The errors are not related to application pool recycling as best I can tell, as the pool is set to recycle at every 100,000 requests. I am not running a web farm or web garden. Quite often I get two or three of these errors in a row, as if a user is getting an error, going back, and then clicking the link again. Anyone have any ideas?
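    Since the exception is already trapped in global.asax, one way to degrade gracefully while the root cause is hunted down is to intercept this specific failure and send the user back for a clean retry instead of an error page. A minimal sketch, assuming the failure surfaces as an HttpException whose message begins "Unable to validate data"; the message check and redirect policy are illustrative placeholders, not code from the original post:

        protected void Application_Error(object sender, EventArgs e)
        {
            // Global.asax wraps the real exception in an HttpUnhandledException
            Exception ex = Server.GetLastError();
            if (ex is HttpUnhandledException && ex.InnerException != null)
                ex = ex.InnerException;

            var httpEx = ex as HttpException;
            if (httpEx != null && httpEx.Message.StartsWith("Unable to validate data"))
            {
                Server.ClearError();                 // swallow this known failure
                Response.Redirect(Request.RawUrl);   // re-issue the request as a fresh GET (placeholder policy)
            }
        }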

    Read the article

  • Connecting Linux to WatchGuard Firebox SSL (OpenVPN client)

    Recently, I got a new project assignment that requires me to connect permanently to the customer's network through VPN. They are using a so-called SSL VPN. As I have been using OpenVPN for more than five years within my company's network, I was quite curious about their solution and how it would actually differ from OpenVPN. Well, short version: it is a disguised version of OpenVPN. Unfortunately, the company only offers a client for Windows and Mac OS X, which shouldn't bother any Linux user after all. OpenVPN is part of every recent distribution and can be activated in a couple of minutes - both client as well as server (if necessary).

    [Image: WatchGuard Firebox SSL - About dialog]

    Borrowing some files from a Windows client installation

    Initially, I didn't know about the product, so I went through the installation on Windows 8. No obstacles (and no restart despite installation of TAP device drivers!), and the secured VPN channel was up and running in less than 2 minutes or so. Much appreciated by both parties - customer and me. Of course, this whole client package and my long-standing, stable installation ignited my interest to have a closer look at the WatchGuard client. Compared to the original OpenVPN client (okay, I have to admit this was years ago) this commercial product is smarter in terms of file locations during installation. You'll be able to access the configuration and key files below your roaming application data folder. To get there, simply enter '%AppData%\WatchGuard\Mobile VPN' in your Windows/File Explorer and confirm with Enter/Return. This will display the following files:

    [Image: Application folder below user profile with configuration and certificate files]

    From there we are going to borrow four files, namely:

        ca.crt
        client.crt
        client.ovpn
        client.pem

    and transfer them to the Linux system. You might also be able to isolate those four files from a Mac OS X client. Frankly, I'm just too lazy to run the WatchGuard client installation on a Mac mini only to find the folder location, and I'm going to describe why a little bit further down this article. I know that you can do that! Feedback in the comment section is appreciated.

    Configuration of OpenVPN (console)

    Depending on your distribution the following steps might be a little different, but in general you should be able to get the important information from them. I'm going to describe the steps in Ubuntu 13.04 (Raring Ringtail). As usual, there are two possibilities to achieve your goal: console and UI. Let's see what needs to be done. First of all, you should ensure that you have OpenVPN installed on your system. Open your favourite terminal application and run the following statement:

        $ sudo apt-get install openvpn network-manager-openvpn network-manager-openvpn-gnome

    Just to be on the safe side. The four above-mentioned files from your Windows machine can be copied anywhere, but either you place them below your own user directory or you put them (as root) below the default directory:

        /etc/openvpn

    At this stage you would be able to do a test run already. Just in case, run the following command and check the output (it's similar to the information you would get from the 'View Logs...' context menu entry in Windows):

        $ sudo openvpn --config client.ovpn

    Pay attention to the correct path to your configuration and certificate files. OpenVPN will ask you to enter your Auth Username and Auth Password in order to establish the VPN connection, same as the Windows client.

    [Image: Remote server and user authentication to establish the VPN]

    Please complete the test run and see whether all went well. You can disconnect by pressing Ctrl+C.

    Simplifying your life - authentication file

    In my case, I actually set up the OpenVPN client on my gateway/router. This establishes a VPN channel between my network and my client's network and allows me to switch machines easily without having to install the WatchGuard client on each and every machine. That's also very handy for my various virtualised Windows machines. Anyway, as the client configuration, key and certificate files are located on a headless system somewhere under the roof, it is mandatory to have an automatic connection to the remote site. For that you should first change the file extension '.ovpn' to '.conf', which is the default extension on Linux systems for OpenVPN, and then open the client configuration file in order to extend an existing line.

        $ sudo mv client.ovpn client.conf
        $ sudo nano client.conf

    You should have content similar to this:

        dev tun
        client
        proto tcp-client
        ca ca.crt
        cert client.crt
        key client.pem
        tls-remote "/O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server"
        remote-cert-eku "TLS Web Server Authentication"
        remote 1.2.3.4 443
        persist-key
        persist-tun
        verb 3
        mute 20
        keepalive 10 60
        cipher AES-256-CBC
        auth SHA1
        float 1
        reneg-sec 3660
        nobind
        mute-replay-warnings
        auth-user-pass auth.txt

    Note: I changed the IP address of the remote directive above (which should be obvious, right?). Anyway, the required change is the final 'auth-user-pass' line, and we have to create a new authentication file 'auth.txt'. You can give the directive 'auth-user-pass' any file name you'd like. Due to my existing OpenVPN infrastructure my setup differs completely from the above content, but for the sake of simplicity I just keep it 'as-is'. Okay, let's create this file 'auth.txt':

        $ sudo nano auth.txt

    and just put two lines of information in it - username on the first, and password on the second line, like so:

        myvpnusername
        verysecretpassword

    Store the file, change permissions, and call openvpn with your configuration file again:

        $ sudo chmod 0600 auth.txt
        $ sudo openvpn --config client.conf

    This should now work without you being prompted to enter username and password. In case you placed your files below the system-wide location /etc/openvpn, you can operate your VPNs via the service command, like so:

        $ sudo service openvpn start client
        $ sudo service openvpn stop client

    Using Network Manager

    For newer Linux users or the ones with 'console-phobia', I'm going to describe now how to use Network Manager to set up the OpenVPN client. For this, move your mouse to the systray area and click on Network Connections => VPN Connections => Configure VPNs..., which opens your Network Connections dialog. Alternatively, use the HUD and enter 'Network Connections'.

    [Image: Network connections overview in Ubuntu]

    Click on the 'Add' button. On the next dialog select 'Import a saved VPN configuration...' from the dropdown list and click on 'Create...'.

    [Image: Choose connection type to import VPN configuration]

    Now navigate to the folder where you put the client files from the Windows system and open the 'client.ovpn' file. Next, on the 'VPN' tab proceed with the following steps (directives from the configuration file are referenced):

        General
        - Check the IP address of Gateway ('remote' - we used 1.2.3.4 in this setup)

        Authentication
        - Change Type to 'Password with Certificates (TLS)' ('auth-user-pass')
        - Enter User name to access your client keys (Auth Name: myvpnusername)
        - Enter Password (Auth Password: verysecretpassword) and choose your password handling
        - Browse for your User Certificate ('cert' - should be pre-selected with client.crt)
        - Browse for your CA Certificate ('ca' - should be filled as ca.crt)
        - Specify your Private Key ('key' - here: client.pem)

    Then click on the 'Advanced...' button and check the following values:

        - Use custom gateway port: 443 (second value of the 'remote' directive)
        - Check the selected value of Cipher ('cipher')
        - Check HMAC Authentication ('auth')
        - Enter the Subject Match: /O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server ('tls-remote')

    Finally, confirm and close all dialogs. You should now be able to establish your OpenVPN-WatchGuard connection via Network Manager. For that, click on the 'VPN Connections => client' entry in your Network Manager in the systray. It is advisable to keep an eye on the syslog to see whether there are any problematic issues that would require some additional attention.

    Advanced topic: routing

    As stated above, I'm running the 'WatchGuard client for Linux' on my headless server, and since then I'm actually establishing a secure communication channel between two networks. In order to enable your network clients to get access to machines on the remote side, there are two ways to do that:

        - Proper routing on both sides of the connection, which enables access in both directions, or
        - Network masquerading on the 'client side' of the connection

    In the following, I'm going to describe the second option in a little more detail. The Linux system that I'm using is already configured as a gateway to the internet. I won't explain the necessary steps to do that, and will only focus on the additional tweaks I had to make. You can find tons of very good instructions and tutorials on 'How to set up a Linux gateway/router' - just use Google.

    OK, back to the actual modifications. First, we need some information about the network topology and IP address range used on the 'other' side. We can get this very easily from /var/log/syslog after we have established the OpenVPN channel, like so:

        $ sudo tail -n20 /var/log/syslog

    Or, if your system is quite busy with logging, like so:

        $ sudo less /var/log/syslog | grep ovpn

    The output should contain a PUSH_REPLY message similar to the following one:

        Jul 23 23:13:28 ios1 ovpn-client[789]: PUSH: Received control message: 'PUSH_REPLY,topology subnet,route 192.168.1.0 255.255.255.0,dhcp-option DOMAIN ,route-gateway 192.168.6.1,topology subnet,ping 10,ping-restart 60,ifconfig 192.168.6.2 255.255.255.0'

    The interesting part for us is the route entry in the sample PUSH_REPLY. Depending on your remote server there might be multiple networks defined (172.16.x.x and/or 10.x.x.x). Important: the IP address ranges on the two sides of the connection have to be different; otherwise you will have to shuffle IPs or increase the netmask.

    After the VPN connection is established, we have to extend the rules for iptables in order to route and masquerade IP packets properly. I created a shell script to take care of those steps:

        #!/bin/sh -e
        IPTABLES=/sbin/iptables
        DEV_LAN=eth0
        DEV_VPNS=tun+
        VPN=192.168.1.0/24

        $IPTABLES -A FORWARD -i $DEV_LAN -o $DEV_VPNS -d $VPN -j ACCEPT
        $IPTABLES -A FORWARD -i $DEV_VPNS -o $DEV_LAN -s $VPN -j ACCEPT
        $IPTABLES -t nat -A POSTROUTING -o $DEV_VPNS -d $VPN -j MASQUERADE

    I'm using the wildcard interface 'tun+' because I have multiple client configurations for OpenVPN on my server. In your case, it might be sufficient to specify device 'tun0' only.

    Simplifying your life - automatic connect on boot

    Now that the client connection works flawlessly and the configuration of routing and iptables is okay, we might consider adding another 'laziness' factor to our setup. Due to kernel updates or other circumstances it might be necessary to reboot your system. Wouldn't it be nice if the VPN connections were established during the boot procedure? Yes, of course it would. To achieve this, we have to configure OpenVPN to automatically start our VPNs via the init script. Let's have a look at the responsible 'default' file and adjust the settings accordingly:

        $ sudo nano /etc/default/openvpn

    It should have content similar to this:

        # This is the configuration file for /etc/init.d/openvpn
        #
        # Start only these VPNs automatically via init script.
        # Allowed values are "all", "none" or space separated list of
        # names of the VPNs. If empty, "all" is assumed.
        # The VPN name refers to the VPN configutation file name.
        # i.e. "home" would be /etc/openvpn/home.conf
        #
        #AUTOSTART="all"
        #AUTOSTART="none"
        #AUTOSTART="home office"
        #
        # ... more information which remains unmodified ...

    With the OpenVPN client configuration described above, you would set AUTOSTART either to "all" or to "client" to enable automatic start of your VPN(s) during boot. You should also take care that your iptables commands are executed after the link has been established. You can easily test this configuration without a reboot, like so:

        $ sudo service openvpn restart

    Enjoy stable VPN connections between your Linux system(s) and a WatchGuard Firebox SSL remote server.

    Cheers, JoKi
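    Not from the original article, but two standard iptables commands are handy for verifying that the script above did what it should - they list the active FORWARD and NAT rules with packet counters, so you can watch traffic actually hitting the rules:

        $ sudo iptables -L FORWARD -v -n
        $ sudo iptables -t nat -L POSTROUTING -v -n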

    Read the article

  • SQL SERVER – Fix – Agent Starting Error 15281 – SQL Server blocked access to procedure ‘dbo.sp_get_sqlagent_properties’ of component ‘Agent XPs’ because this component is turned off as part of the security configuration for this server

    - by Pinal Dave
    SQL Server Agent failing to start because of error 15281 is a very common problem. When you try to restart SQL Server Agent, it sometimes gives the following error:

        SQL Server blocked access to procedure 'dbo.sp_get_sqlagent_properties' of component 'Agent XPs' because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of 'Agent XPs' by using sp_configure. For more information about enabling 'Agent XPs', search for 'Agent XPs' in SQL Server Books Online. (Microsoft SQL Server, Error: 15281)

    To resolve this error, the following script has to be executed on the server:

        sp_configure 'show advanced options', 1;
        GO
        RECONFIGURE;
        GO
        sp_configure 'Agent XPs', 1;
        GO
        RECONFIGURE
        GO

    When you run the above script, it reports the configuration change on the screen. Now, if you try to restart SQL Server Agent, it will just work fine. That's it! Sometimes there is a simpler solution to a complicated error. Reference: Pinal Dave (http://blog.sqlauthority.com)
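    As a quick sanity check (not part of the original post), the current state of the option can be read back from the sys.configurations catalog view:

        -- Verify that 'Agent XPs' is enabled (value_in_use = 1)
        SELECT name, value, value_in_use
        FROM sys.configurations
        WHERE name = 'Agent XPs';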

    Read the article

  • Change the default SqlCommand CommandTimeout with configuration rather than recompile?

    - by robertc
    I am supporting an ASP.Net 3.5 web application and users are experiencing a timeout error after 30 seconds when trying to run a report. Looking around the web it seems it's easy enough to change the timeout in the code; unfortunately, I'm not able to access the code and recompile. Is there any way to configure the default for either the web app, the worker process, IIS or the whole machine? Here is the stack trace up to the point where it's in System.Data, in case I'm missing some other problem:

        [SqlException (0x80131904): Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.]
        System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +1948826
        System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4844747
        System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194
        System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2392
        System.Data.SqlClient.SqlDataReader.ConsumeMetaData() +33
        System.Data.SqlClient.SqlDataReader.get_MetaData() +83
        System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) +297
        System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) +954
        System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +162
        System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) +32
        System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) +141
        System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) +12
        System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior) +10
        System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) +130
        System.Data.Common.DbDataAdapter.Fill(DataTable[] dataTables, Int32 startRecord, Int32 maxRecords, IDbCommand command, CommandBehavior behavior) +162
        System.Data.Common.DbDataAdapter.Fill(DataTable dataTable) +115

    Edit: There must be something outside the code itself - I've downloaded the database and run it against the same web site installed on a test server, and there it runs for longer than 30 seconds and returns the report. I've compared the machine.config and web.config files from the .Net directory on the live and test machines and they seem the same, compared the two IIS setups, and also looked at the SQL Server configuration; the only difference is that the live server is clustered on 64-bit W2K3 while the test server is on 32-bit.
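    For reference only (the poster cannot recompile): SqlCommand.CommandTimeout defaults to 30 seconds and is normally raised in code rather than configuration, so the code-side change would look something like this sketch with illustrative names:

        // Requires System.Data and System.Data.SqlClient; connectionString and reportSql are placeholders
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(reportSql, connection))
        {
            command.CommandTimeout = 120; // default is 30 seconds; 0 means wait indefinitely (use with care)

            var adapter = new SqlDataAdapter(command);
            var table = new DataTable();
            adapter.Fill(table); // Fill opens and closes the connection itself
        }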

    Read the article

  • Do You Know How OUM defines the four basic types of business system testing performed on a project? Why not test your knowledge?

    - by user713452
    Testing is perhaps the most important process in the Oracle® Unified Method (OUM). That makes it all the more important for practitioners to have a common understanding of the various types of functional testing referenced in the method, and to use the proper terminology when communicating with each other about testing activities. OUM identifies four basic types of functional testing, which is sometimes referred to as business system testing. The basic functional testing types referenced by OUM are:

        1. Unit Testing
        2. Integration Testing
        3. System Testing
        4. Systems Integration Testing

    See if you can match the following definitions with the appropriate type above.

        A. This type of functional testing is focused on verifying that interfaces/integration between the system being implemented (i.e. the System under Discussion (SuD)) and external systems function as expected.

        B. This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the several custom components developed to satisfy a given requirement (e.g. screen, program, report, etc.) interact with one another as designed.

        C. This type of functional testing is focused on verifying that the functionality within the system being implemented (i.e. the System under Discussion (SuD)) functions as expected. This includes out-of-the-box functionality delivered with Commercial Off-The-Shelf (COTS) applications, as well as any custom components developed to address gaps in functionality.

        D. This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the individual custom components developed to satisfy a given requirement (e.g. screen, program, report, etc.) function as designed.

    Check your answers below:

        1. (D)   2. (B)   3. (C)   4. (A)

    If you matched all of the functional testing types to their definitions correctly, then congratulations! If not, you can find more information in the Testing Process Overview and Testing Task Overviews in the OUM Method Pack.

    Read the article

  • Setting custom WCF-binding behaviour via .config file - why doesn't this work?

    - by Andrew Shepherd
    I am attempting to insert a custom behavior into my service client, following the example here. I appear to be following all of the steps, but I am getting a ConfigurationErrorsException. Is there anyone more experienced than me who can spot what I'm doing wrong? Here is the entire app.config file:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <behaviors>
              <endpointBehaviors>
                <behavior name="ClientLoggingEndpointBehaviour">
                  <myLoggerExtension />
                </behavior>
              </endpointBehaviors>
            </behaviors>
            <extensions>
              <behaviorExtensions>
                <add name="myLoggerExtension"
                     type="ChatClient.ClientLoggingEndpointBehaviourExtension, ChatClient, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null"/>
              </behaviorExtensions>
            </extensions>
            <bindings>
            </bindings>
            <client>
              <endpoint behaviorConfiguration="ClientLoggingEndpointBehaviour"
                        name="ChatRoomClientEndpoint"
                        address="http://localhost:8016/ChatRoom"
                        binding="wsDualHttpBinding"
                        contract="ChatRoomLib.IChatRoom" />
            </client>
          </system.serviceModel>
        </configuration>

    Here is the exception message:

        An error occurred creating the configuration section handler for system.serviceModel/behaviors: Extension element 'myLoggerExtension' cannot be added to this element. Verify that the extension is registered in the extension collection at system.serviceModel/extensions/behaviorExtensions. Parameter name: element (C:\Documents and Settings\Andrew Shepherd\My Documents\Visual Studio 2008\Projects\WcfPractice\ChatClient\bin\Debug\ChatClient.vshost.exe.config line 5)

    I know that I've correctly written the reference to the ClientLoggingEndpointBehaviourExtension object, because through the debugger I can see it being instantiated.
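    A frequently cited cause of this exact error on .NET 3.0/3.5 (offered here as a hedged pointer, not something verified from this thread) is that the type attribute of a behaviorExtensions entry is compared as a literal string, so it must be the exact assembly-qualified name, spacing included. A one-liner prints the precise string to paste in, assuming the ChatClient assembly is referenced:

        // Prints the exact assembly-qualified name for the 'type' attribute
        Console.WriteLine(typeof(ChatClient.ClientLoggingEndpointBehaviourExtension).AssemblyQualifiedName);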

    Read the article

  • How can I use a custom TabItem control when databinding a TabControl in WPF?

    - by Russ
    I have a custom control that is derived from TabItem, and I want to databind that custom TabItem to a stock TabControl. I would rather avoid creating a new TabControl just for this rare case. This is what I have, and I'm not having any luck getting the correct control to be loaded. In this case I want to use my ClosableTabItem control instead of the stock TabItem control.

        <TabControl x:Name="tabCases" IsSynchronizedWithCurrentItem="True"
                    Controls:ClosableTabItem.TabClose="TabClosed" >
          <TabControl.ItemTemplate>
            <DataTemplate DataType="{x:Type Controls:ClosableTabItem}" >
              <TextBlock Text="{Binding Path=Id}" />
            </DataTemplate>
          </TabControl.ItemTemplate>
          <TabControl.ContentTemplate>
            <DataTemplate DataType="{x:Type Entities:Case}">
              <CallLog:CaseReadOnlyDisplay DataContext="{Binding}" />
            </DataTemplate>
          </TabControl.ContentTemplate>
        </TabControl>

    EDIT: This is what I ended up with, rather than trying to bind a custom control. The "CloseCommand" I'm getting from a previous question.

        <Style TargetType="{x:Type TabItem}" BasedOn="{StaticResource {x:Type TabItem}}" >
          <Setter Property="Template">
            <Setter.Value>
              <ControlTemplate TargetType="{x:Type TabItem}">
                <Border Name="Border" Background="LightGray" BorderBrush="Black" BorderThickness="1"
                        CornerRadius="25,0,0,0" SnapsToDevicePixels="True">
                  <StackPanel Orientation="Horizontal">
                    <ContentPresenter x:Name="ContentSite" VerticalAlignment="Center" HorizontalAlignment="Center"
                                      ContentSource="Header" Margin="20,1,5,1"/>
                    <Button Command="{Binding Path=CloseCommand}" Cursor="Hand" DockPanel.Dock="Right"
                            Focusable="False" Margin="1,1,5,1" Background="Transparent" BorderThickness="0">
                      <Image Source="/Russound.Windows;component/Resources/Delete.png" Height="10" />
                    </Button>
                  </StackPanel>
                </Border>
                <ControlTemplate.Triggers>
                  <Trigger Property="IsSelected" Value="True">
                    <Setter Property="FontWeight" Value="Bold" />
                    <Setter TargetName="Border" Property="Background" Value="LightBlue" />
                    <Setter TargetName="Border" Property="BorderThickness" Value="1,1,1,0" />
                    <Setter TargetName="Border" Property="BorderBrush" Value="DarkBlue" />
                  </Trigger>
                </ControlTemplate.Triggers>
              </ControlTemplate>
            </Setter.Value>
          </Setter>
        </Style>
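    The CloseCommand bound above comes from the poster's view model; one possible shape for it is a plain ICommand wrapper around a delegate. A minimal, hypothetical sketch (names invented, not code from the original question):

        public class CloseTabCommand : System.Windows.Input.ICommand
        {
            private readonly Action _closeAction;

            public CloseTabCommand(Action closeAction)
            {
                _closeAction = closeAction; // e.g. remove this tab's item from the bound collection
            }

            public bool CanExecute(object parameter) { return true; }

            public void Execute(object parameter) { _closeAction(); }

            // Executability never changes in this sketch
            public event EventHandler CanExecuteChanged { add { } remove { } }
        }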

    Read the article

  • Can I autogenerate/compile code on-the-fly, at runtime, based upon values (like key/value pairs) parsed out of a configuration file?

    - by Kumba
    This might be a doozy for some. I'm not sure if it's even 100% implementable, but I wanted to throw the idea out there to see if I'm really off my rocker yet. I have a set of classes that mimics enums (see my other questions for specific details/examples). For 90% of my project, I can compile everything in at design time. But the remaining 10% is going to need to be editable without re-compiling the project in VS 2010. This remaining 10% will be based on a templated version of my Enums class, but will generate code at runtime, based upon data values sourced in from external configuration files. To keep this question small, see this SO question for an idea of what my Enums class looks like. The templated fields, per that question, will be the MaxEnums Int32, Names String() array, and Values array, plus each shared implementation of the Enums sub-class (which themselves represent the Enums that I use elsewhere in my code). I'd ideally like to parse values from a simple text file (INI-style) of key/value pairs:

        [Section1]
        Enum1=enum_one
        Enum2=enum_two
        Enum3=enum_three

    So that the following code would be generated (and compiled) at runtime (comments/supporting code stripped to reduce question size):

        Friend Shared ReadOnly MaxEnums As Int32 = 3

        Private Shared ReadOnly _Names As String() = New String() _
            {"enum_one", "enum_two", "enum_three"}

        Friend Shared ReadOnly Enum1 As New Enums(_Names(0), 1)
        Friend Shared ReadOnly Enum2 As New Enums(_Names(1), 2)
        Friend Shared ReadOnly Enum3 As New Enums(_Names(2), 4)

        Friend Shared ReadOnly Values As Enums() = New Enums() _
            {Enum1, Enum2, Enum3}

    I'm certain this would need to be generated in MSIL code, and I know from reading that the two components to look at are CodeDom and Reflection.Emit, but I was wondering if anyone had working examples (or pointers to working examples) versus really long articles. I'm a hands-on learner, so I have to have example code to play with. Thanks!
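    A runnable starting point along the CodeDom route - a minimal sketch (in C# for brevity; Microsoft.VisualBasic.VBCodeProvider works the same way for VB source) that compiles a class from a string in memory and reads a field back via reflection. All names are illustrative, and the source text stands in for whatever your INI parser would generate:

        using System;
        using System.CodeDom.Compiler;
        using Microsoft.CSharp;

        class RuntimeCompileDemo
        {
            static void Main()
            {
                // In the real setup, this source text would be generated from the parsed key/value pairs
                const string source = @"
                    public static class GeneratedEnums
                    {
                        public static readonly int MaxEnums = 3;
                        public static readonly string[] Names =
                            { ""enum_one"", ""enum_two"", ""enum_three"" };
                    }";

                using (var provider = new CSharpCodeProvider())
                {
                    var options = new CompilerParameters { GenerateInMemory = true };
                    CompilerResults results = provider.CompileAssemblyFromSource(options, source);
                    if (results.Errors.HasErrors)
                        throw new InvalidOperationException("Runtime compilation failed.");

                    // Pull the generated values back out via reflection
                    Type generated = results.CompiledAssembly.GetType("GeneratedEnums");
                    var names = (string[])generated.GetField("Names").GetValue(null);
                    Console.WriteLine(string.Join(", ", names)); // enum_one, enum_two, enum_three
                }
            }
        }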

    Read the article

  • nhibernate says 'mapping exception was unhandled' no persister for: MyNH.Domain.User

    - by mrblah
    Hi, I am using NHibernate and Fluent. I created a User.cs:

        public class User
        {
            public virtual int Id { get; set; }
            public virtual string Username { get; set; }
            public virtual string Password { get; set; }
            public virtual string Email { get; set; }
            public virtual DateTime DateCreated { get; set; }
            public virtual DateTime DateModified { get; set; }
        }

    Then in my mappings folder:

        public class UserMapping : ClassMap<User>
        {
            public UserMapping()
            {
                WithTable("ay_users");
                Not.LazyLoad();
                Id(x => x.Id).GeneratedBy.Identity();
                Map(x => x.Username).Not.Nullable().WithLengthOf(256);
                Map(x => x.Password).Not.Nullable().WithLengthOf(256);
                Map(x => x.Email).Not.Nullable().WithLengthOf(100);
                Map(x => x.DateCreated).Not.Nullable();
                Map(x => x.DateModified).Not.Nullable();
            }
        }

    Using the repository pattern from the NHibernate blog:

        public class UserRepository : Repository<User> { }

        public class Repository<T> : IRepository<T>
        {
            public ISession Session
            {
                get { return SessionProvider.GetSession(); }
            }

            public T GetById(int id) { return Session.Get<T>(id); }

            public ICollection<T> FindAll()
            {
                return Session.CreateCriteria(typeof(T)).List<T>();
            }

            public void Add(T product) { Session.Save(product); }

            public void Remove(T product) { Session.Delete(product); }
        }

        public interface IRepository<T>
        {
            T GetById(int id);
            ICollection<T> FindAll();
            void Add(T entity);
            void Remove(T entity);
        }

        public class SessionProvider
        {
            private static Configuration configuration;
            private static ISessionFactory sessionFactory;

            public static Configuration Configuration
            {
                get
                {
                    if (configuration == null)
                    {
                        configuration = new Configuration();
                        configuration.Configure();
                        configuration.AddAssembly(typeof(User).Assembly);
                    }
                    return configuration;
                }
            }

            public static ISessionFactory SessionFactory
            {
                get
                {
                    if (sessionFactory == null)
                        sessionFactory = Configuration.BuildSessionFactory();
                    return sessionFactory;
                }
            }

            private SessionProvider() { }

            public static ISession GetSession()
            {
                return SessionFactory.OpenSession();
            }
        }

    My config:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
          <session-factory>
            <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
            <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
            <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
            <property name="connection.connection_string">Server=.\SqlExpress;Initial Catalog=TestNH;User Id=dev;Password=123</property>
            <property name="show_sql">true</property>
          </session-factory>
        </hibernate-configuration>

    I created a console application to test the output:

        static void Main(string[] args)
        {
            Console.WriteLine("starting...");
            UserRepository users = new UserRepository();
            User user = users.GetById(1);
            Console.WriteLine("user is null: " + (null == user));
            if (null != user)
                Console.WriteLine("User: " + user.Username);
            Console.WriteLine("ending...");
            Console.ReadLine();
        }

    Error: NHibernate says 'mapping exception was unhandled' - no persister for: MyNH.Domain.User. What could be the issue? I did do the mapping.
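    A plausible cause, offered as a hedged suggestion rather than a verified answer for this thread: Configuration.AddAssembly only registers hbm.xml mappings, so fluent ClassMap classes never reach the session factory. Building the factory through Fluent NHibernate's own bootstrap (using the names from the question) would look roughly like this:

        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;
        using NHibernate;

        public static class FluentSessionProvider
        {
            public static ISessionFactory BuildSessionFactory()
            {
                return Fluently.Configure()
                    .Database(MsSqlConfiguration.MsSql2005.ConnectionString(
                        @"Server=.\SqlExpress;Initial Catalog=TestNH;User Id=dev;Password=123"))
                    // Registers every fluent ClassMap found in the mappings assembly
                    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<UserMapping>())
                    .BuildSessionFactory();
            }
        }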

    Read the article

  • If some standards apply when "it depends" then should I stick with custom approaches?

    - by Travis J
    If I have an unconventional approach which works better than the industry standard, should I just stick with it even though in principle it violates those standards? What I am talking about is referential integrity for relational database management systems. The standard way of enforcing referential integrity is the CASCADE delete. In practice, this is just not going to work all the time. In my current case, it does not. The alternative suggested is to either change the reference to NULL or DEFAULT, or just to take NO ACTION - usually in the form of a "soft delete". I am all about enforcing referential integrity. Love it. However, sometimes it just is not practical to apply all the standards fully. My approach has been to slightly abandon a small part of one of those practices - the part about leaving "hanging references" around. Oops. The trade-off is plentiful in this situation, I believe. Instead of having deprecated data in the production database, a splattering of "soft delete" logic all across my controllers (and views sometimes, depending on how far down the chain the soft delete occurred), and the prospect of queries taking longer and longer - instead of all that - I now have a recycle bin and centralized logic. The only trade-off is that I must explicitly manage the possibility of "hanging references", which can be done through generics with one class. Any thoughts?
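    The "generics with one class" idea mentioned above could take many forms; one minimal, hypothetical sketch of centralizing the soft-delete bookkeeping (all names invented for illustration):

        public interface ISoftDeletable
        {
            bool IsDeleted { get; set; }
            DateTime? DeletedOn { get; set; }
        }

        // One class owns all soft-delete logic instead of scattering it across controllers
        public class RecycleBin<T> where T : class, ISoftDeletable
        {
            public void Delete(T entity)
            {
                entity.IsDeleted = true;
                entity.DeletedOn = DateTime.UtcNow;
            }

            public void Restore(T entity)
            {
                entity.IsDeleted = false;
                entity.DeletedOn = null;
            }
        }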

    Read the article

  • UppercuT - Custom Extensions Now With PowerShell and Ruby

    Arguably, one of the most powerful features of UppercuT (UC) is the ability to extend any step of the build process with a pre, post, or replace hook. This customization is done in a separate location from the build, so you can upgrade without wondering if you broke the build. There is a hook before each step of the build runs. There is a hook after. And back to power again, there is a replacement hook. If you don't like what the step is doing and/or you want to replace its entire functionality,...

    Read the article

  • Visual Studio 2010 Winform Application - Unable to resolve custom assemblies?

    - by Harish Ranganathan
    Recently I surfaced a problem where one of my friends had a tough time getting rid of an assembly reference error. Despite adding a reference to the assembly, referencing it in code kept producing the "The type or namespace name 'ASSEMBLYNAME' could not be found" error. This was a migration project, and owing to the above error it was throwing another 100 errors. We tried adding a reference to the assembly in other projects, and the namespace would not even resolve while typing it out in the using section. Digging further into the warnings indicated something to do with the .NET Framework being targeted, i.e. 4.0. My suspicion grew, since the target framework was 4.0 and the assembly should have been recognized. Then, when we checked Project > "<APPNAME> Properties...", the issue turned out to be the default target framework, which is ".NET Framework 4 Client Profile". By default, Visual Studio 2010 creates Windows Forms and WPF apps with the Target Framework set to .NET Framework 4 Client Profile. This is to minimize the framework size required to be bundled along with the app. The Client Profile is a feature introduced in .NET 3.5 SP1 that allows users to package a minified version of the .NET Framework that doesn't include pieces such as ASP.NET, server programming assemblies and a few other assemblies which are typically never used in desktop applications. Since the .NET Framework Client Profile is a minified version, it doesn't contain all the assemblies related to web services, along with other deprecated assemblies. However, this application was a migration app and needed some of those service references, and hence couldn't run. Once we changed the Target Framework to .NET Framework 4 instead of the default Client Profile, the application compiled. Here is a link to a very nice article that explains the features of the .NET Framework 4 Client Profile, the assemblies supported by default, etc.: http://blogs.msdn.com/b/jgoldb/archive/2010/04/12/what-s-new-in-net-framework-4-client-profile-rtm.aspx Cheers !!
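    The dialog setting maps to an MSBuild property in the .csproj file. A hedged illustration of what changes (TargetFrameworkVersion and TargetFrameworkProfile are real MSBuild properties; the surrounding project content is omitted):

        <PropertyGroup>
          <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
          <!-- Present for Client Profile builds; removing this line (or switching the
               dropdown in project properties) targets the full .NET Framework 4 -->
          <TargetFrameworkProfile>Client</TargetFrameworkProfile>
        </PropertyGroup>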

    Read the article

  • Wolfram is out, any alternatives? Or how to go custom?

    - by Patrick
    We were originally planning on using the Wolfram Alpha API for a new project, but unfortunately the cost was entirely too high for what we were using it for. Essentially what we were doing is calculating the nutrition facts for food (http://www.wolframalpha.com/input/?i=chicken+breast+with+broccoli). Before taking the step of trying to build something that may work in its place for this use case: is there any open source code anywhere that can do this kind of analysis and compile the data? The hardest part, in my opinion, is the assumptions it makes and where it gets the data to power the calculations. Or another way to put it: I cannot seem to wrap my head around building something that computes user input to return facts and knowledge. I know if I can convert the user input into some standardized form, I can then compare that to a nutrition-fact database to pull in the information I need. Does anyone know of any solutions to re-create this, or APIs that can provide this kind of analysis? Thanks for any advice. I am trying to figure out if this project is dead in the water before it even starts. This kind of programming is well beyond me, so I can only hope for an API, open source, or some kind of analysis engine to interpret user input when I know what kind of data they are entering (measurements and food).

    Read the article

  • VS2010 / Code Analysis: Turn off a rule for a project without custom ruleset....

    - by TomTom
    ...any chance? The scenario is this: for our company we develop a standard for how code should look. This will be the MS full rule set as it stands now. For some specific projects we may want to turn off specific rules, simply because for a specific project this is a "known exception". Example? CA1026 - while perfectly OK in most cases, there are 1-2 specific libraries where we don't want to change those signatures. We also want to avoid having a custom rule set. OTOH, putting in a suppress attribute on every occurrence gets pretty convoluted pretty fast. Is there any way to turn off a code analysis warning for a complete assembly without a custom rule set? We would rather have that in a specific file (GlobalSuppressions.cs) than in a rule set, for maintenance reasons and to be more explicit ;)
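    For the GlobalSuppressions.cs route, the attribute syntax looks like the sketch below. A hedged caveat: whether a single assembly-level suppression without a Target silences every occurrence of a rule depends on the Code Analysis/FxCop version and is worth verifying; per-occurrence suppressions normally carry Target and Scope values.

        // GlobalSuppressions.cs
        using System.Diagnostics.CodeAnalysis;

        [assembly: SuppressMessage(
            "Microsoft.Design",
            "CA1026:DefaultParametersShouldNotBeUsed",
            Justification = "Known exception for this library")]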

    Read the article

  • links for 2010-12-10

    - by Bob Rhubart
    Oracle VM Blade Cluster Reference Configuration (InfraRed)
    "All components listed in the reference configuration have been tested together by Oracle, reducing the need for customer testing and the time-consuming and complex effort of designing and deploying a stable configuration." -- Ferhat Hatay
    (tags: oracle virtualization clustering)

    White Paper: Accelerating Deployment of Virtualized Infrastructures with the Oracle VM Blade Cluster Reference Configuration
    The Oracle VM blade cluster reference configuration described in this paper provides a complete and fully tested virtualized stack that can reduce deployment time by weeks or months while also reducing risk and improving application performance.
    (tags: oracle otn virtualization infrastructure)

    White Paper: Best Practices and Guidelines for Deploying the Oracle VM Blade Cluster Reference Configuration
    This paper provides recommendations and best practices for optimizing virtualization infrastructures when deploying the Oracle VM blade cluster reference configuration.
    (tags: oracle otn virtualization clustering)

    Your Most Familiar Processes - Rethink before using E2.0 | Enterprise 2.0 Blogs
    "Imagine what gains your organization could have by asking basic questions and reviewing your familiar processes before setting up even the most fundamental E2.0 technologies to support them!" -- John Brunswick
    (tags: oracle enterprise2.0 otn)

    Oracle's Global Single Schema (Oracle Master Data Management)
    "The success of all business processes depends on the availability of accurate master data. Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business." -- David Butler
    (tags: oracle otn mdm entarch businessprocess)

    One step further towards proven results: IT Strategies from Oracle
    Oracle ACE Douwe Pieter van den Bos shares his thoughts on "IT Strategies from Oracle" in this Google translation of his original Dutch post.
    (tags: oracle itso entarch)

    The Underground Oracle VM Manual
    Just in time for the holidays! Roddy Rodstein's epic 354-page manual is now available in a single PDF.
    (tags: oracle otn virtualization oraclevm)

    Read the article

  • ASP.NET MVC3 Custom Membership Provider - The membership provider name specified is invalid.

    - by David Lively
    I'm implementing a custom membership provider, and everything seems to go swimmingly until I create a MembershipUser object. At that point, I receive the error:

        The membership provider name specified is invalid. Parameter name: providerName

    In web.config the membership element is:

        <membership defaultProvider="MembersProvider">
          <providers>
            <clear/>
            <add name="MembersProvider"
                 type="Members.Providers.MembersProvider"
                 connectionStringName="ApplicationServices"
                 enablePasswordRetrieval="false"
                 enablePasswordReset="true"
                 requiresQuestionAndAnswer="false"
                 requiresUniqueEmail="false"
                 maxInvalidPasswordAttempts="5"
                 minRequiredPasswordLength="6"
                 minRequiredNonalphanumericCharacters="0"
                 passwordAttemptWindow="10"
                 applicationName="DeviceDatabase" />
          </providers>
        </membership>

    When creating the MembershipUser object from my custom User class:

        public static MembershipUser ToMembershipUser(User user)
        {
            MembershipUser member = new MembershipUser
                ("MembersProvider"
                , user.Name
                , user.Id
                , user.EmailAddress
                , user.PasswordQuestion
                , user.Comment
                , user.IsApproved
                , user.IsLockedOut
                , user.DateCreated
                , user.LastLoginDate ?? DateTime.MinValue
                , user.LastActivityDate ?? DateTime.MinValue
                , user.LastPasswordChangedDate ?? DateTime.MinValue
                , user.LastLockoutDate ?? DateTime.MinValue
                );
            return member;
        }

    (I realize I could probably just inherit my User class from MembershipUser, but it's already part of an existing class hierarchy. I honestly think this is the first time I've encountered a legitimate need for multiple inheritance!)

    My feeling is that the new MembershipUser(...) providerName parameter is supposed to match what's set in web.config, but since they already match, I'm at a loss as to how to proceed. Is there a convenient way to get the name of the active membership provider in code? I'm starting to think that using the built-in membership system is overkill and more trouble than it's worth.

    Edit: Not sure if it's relevant, but the custom membership provider class is in a class library, not the main WAP project.

    Update: Here's the contents of the System.Web.Security.Membership.Provider object as shown in the VS2010 command window:

        >eval System.Web.Security.Membership.Provider
        {Members.Providers.MembersProvider}
            [Members.Providers.MembersProvider]: {Members.Providers.MembersProvider}
            base {System.Configuration.Provider.ProviderBase}: {Members.Providers.MembersProvider}
            ApplicationName: null
            EnablePasswordReset: true
            EnablePasswordRetrieval: false
            MaxInvalidPasswordAttempts: 5
            MinRequiredNonAlphanumericCharacters: 0
            MinRequiredPasswordLength: 6
            PasswordAttemptWindow: 10
            PasswordFormat: Function evaluation was aborted.
            PasswordStrengthRegularExpression: Cannot evaluate expression because debugging information has been optimized away.
            RequiresQuestionAndAnswer: Cannot evaluate expression because debugging information has been optimized away.
            RequiresUniqueEmail: Cannot evaluate expression because debugging information has been optimized away.
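    On the specific question of getting the active provider's name in code: Membership.Provider returns the configured default provider, so its Name property avoids hardcoding the string. A hedged one-line adjustment to the method above (whether it resolves this particular exception is untested here):

        // Let the runtime supply the registered provider name instead of a literal
        MembershipUser member = new MembershipUser
            (System.Web.Security.Membership.Provider.Name
            , user.Name
            // ... remaining arguments unchanged from ToMembershipUser above ...
            );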

    Read the article

  • Wordpress auto-generated "canonical" links - how to add a custom URL parameter?

    - by kiko
    Hello - Does anyone know how to modify the WordPress canonical links to add a custom URL parameter? I have a WordPress site with a page that queries a separate (non-WordPress) database. I passed the URL parameter "pubID" to display individual books and it is working OK. Example:

        http://www.uglyducklingpresse.org/catalog/browse/item/?pubID=63

    But the individual book pages are not showing up properly in Google - the ?pubID parameter is stripped out. I think maybe this is because all the item pages have the same auto-generated "canonical" URL link tag in the source - one with the "pubID" parameter stripped out. Example:

        <link rel='canonical' href='http://www.uglyducklingpresse.org/catalog/browse/item/' />

    Is there a way to perhaps edit .htaccess to add a custom URL parameter to WordPress, so that the parameter is not stripped out by permalinks and the "canonical" links? Or maybe there's another solution... Thank you for any ideas!
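    One theme-level option is to replace the generated canonical with one that keeps the parameter. A hedged sketch for functions.php - remove_action, add_action, add_query_arg, get_permalink, esc_url and rel_canonical are core WordPress functions, while the pubID handling is this site's own logic and one approach among several:

        // functions.php - swap WordPress's canonical for a pubID-aware one
        remove_action( 'wp_head', 'rel_canonical' );
        add_action( 'wp_head', 'pubid_aware_canonical' );

        function pubid_aware_canonical() {
            if ( is_page() && isset( $_GET['pubID'] ) ) {
                $url = add_query_arg( 'pubID', (int) $_GET['pubID'], get_permalink() );
                echo "<link rel='canonical' href='" . esc_url( $url ) . "' />\n";
            } else {
                rel_canonical(); // fall back to the default behaviour
            }
        }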

    Read the article

  • Integrating Simple Machines Forum in my custom membership site - login not working?

    - by Ali
    Hi guys, I'm trying to integrate Simple Machines Forum into my custom-made membership website. I'm using the SMF API, which is available at http://download.simplemachines.org/?tools. The thing is that it's working on my localhost testing server; however, on the online server where the system is hosted, it's not working. I have set it up so that when the user logs into my own custom CMS membership site, he/she is logged in automatically to his/her corresponding account on the forum. However, it works on localhost but online it's not working at all. I log into my site and then browse to the forum, only to find out I haven't been logged in there :( - I think it's not creating the cookies or registering the session. Where should I look? Please do help.

    Read the article

  • WIX/DTF - How do I elevate deferred managed custom actions?

    - by Jarrod
    We are working on creating an installer using WIX. We have a couple of custom actions that are going to require elevation to run on Vista. Our MSI obviously elevates itself, because I get a UAC prompt and it writes keys to HKLM. Our managed DTF custom actions are apparently not running elevated. If I start the MSI from an elevated command prompt, the installation runs fine. The only workaround I have seen suggested (link) is to use another exe with a manifest as a bootstrapper to run the MSI. This is not a good solution, as it doesn't allow the user to modify the installation by selecting "Change" from the Add/Remove programs dialog. Our install has multiple features that can be customized, so we want this functionality. Does anybody know how to do this?
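    A hedged sketch of the approach most often suggested for this situation: schedule the managed action as deferred with impersonation turned off, so it runs in the installer's elevated server context (LocalSystem) during the execute sequence instead of as the invoking user. The Id, BinaryKey and DllEntry values below are placeholders, not names from the original project:

        <CustomAction Id="MyElevatedAction"
                      BinaryKey="MyCustomActions.CA.dll"
                      DllEntry="MyEntryPoint"
                      Execute="deferred"
                      Impersonate="no"
                      Return="check" />

        <InstallExecuteSequence>
          <Custom Action="MyElevatedAction" Before="InstallFinalize">NOT Installed</Custom>
        </InstallExecuteSequence>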

    Read the article

  • In Drupal 6, is there a way to take a custom field from the latest post to a taxonomy term, and disp

    - by user278457
    The title for this question pretty much sums up what I'm asking. I've got a list of taxonomy terms, and I'm using a view to display the latest post to each one. I'd like to also display a custom field set up in CCK just under this. Currently, I'm just using the "date updated" of the taxonomy term itself, which was easy to set up in Views. I'd like to drill a little deeper and get the custom "event date" field I've added to the content type of the node last posted to the taxonomy term I'm "viewing". I've got a feeling I'm going to have to write my own database query for this.

        If (I can avoid that) {
            How do I set up such a view?
        } Else {
            What's the best practice for including lower-level database queries alongside views?
        }

    Read the article

  • Why wouldn't a flex remoteobject work within a custom component?

    - by Gary
    Please enlighten this Flex noob. I have a RemoteObject within my main.mxml. I can call a function on the service from an init() function in main.mxml, and my Java debugger triggers a breakpoint. When I move the RemoteObject declaration and function call into a custom component (that is declared within main.mxml), the remote function on the Java side no longer gets called - no breakpoints triggered, no errors, silence. How could this be? No spelling errors, or anything like that. The mxml code:

        <mx:RemoteObject id="myService" destination="remoteService"
            endpoint="{Application.application.home}/messagebroker/amf">
        </mx:RemoteObject>

    The function call is just 'myService.getlist();'. When I move it to a custom component, I import mx.core.Application; so the compiler doesn't yell.

    Read the article

  • Custom ASP.NET MVC cache controllers in a shared hosting environment?

    - by Daniel Crenna
    I'm using custom controllers that cache static resources (CSS, JS, etc.) and images. I'm currently working with a hosting provider that has set me up under a full-trust profile. Despite being in full trust, my controllers fail because the caching strategy relies on the File class to directly open a resource file prior to treatment and storage in memory. Is this something that would likely occur in all full-trust shared hosting environments, or is it specific to my host? The static files live within my application's structure and not in an arbitrary server path. It seems to me that custom caching would require code to access the file directly, and I am hoping someone else has dealt with this issue.

    Read the article
