Search Results

Search found 1086 results on 44 pages for 'persist'.

Page 11 of 44

  • Building a better .NET Application Configuration Class - revisited

    - by Rick Strahl
    Managing configuration settings is an important part of successful applications. It should be easy to access and modify configuration values within your applications; if it isn't, things don't get parameterized as much as they should. In this post I discuss a custom Application Configuration class that makes it super easy to create reusable configuration objects in your applications, using a code-first approach and the ability to persist configuration information into various types of configuration stores.
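
    For illustration, here is a minimal sketch of the code-first idea the article describes: settings declared as typed properties with code defaults, plus a Load/Save pair that moves them in and out of a persistent store. This is not Strahl's actual class; the JSON-file store and System.Text.Json usage are assumptions made for the sketch.

        using System;
        using System.IO;
        using System.Text.Json;

        // Code-first configuration: strongly typed settings with defaults,
        // persisted to a pluggable store (a JSON file in this sketch).
        public class AppConfig
        {
            public string ConnectionString { get; set; } = "server=.;database=MyApp";
            public int MaxPageItems { get; set; } = 25;
            public bool DebugMode { get; set; } = false;

            // Load from the store if present, otherwise fall back to code defaults.
            public static AppConfig Load(string path) =>
                File.Exists(path)
                    ? JsonSerializer.Deserialize<AppConfig>(File.ReadAllText(path)) ?? new AppConfig()
                    : new AppConfig();

            // Persist the current values back to the store.
            public void Save(string path) =>
                File.WriteAllText(path, JsonSerializer.Serialize(
                    this, new JsonSerializerOptions { WriteIndented = true }));
        }

        class Demo
        {
            static void Main()
            {
                var config = AppConfig.Load("app.config.json");
                config.MaxPageItems = 50;       // modify a setting in code
                config.Save("app.config.json"); // write it back to the store
            }
        }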

    Read the article

  • On StringComparison Values

    - by Jesse
    When you use the .NET Framework’s String.Equals and String.Compare methods, do you use an overload that accepts a StringComparison enumeration value? If not, you should be, because the value provided for that StringComparison argument can have a big impact on the results of your string comparison. The StringComparison enumeration defines values that fall into three major categories:

        Culture-sensitive comparison using a specific culture, defaulting to the Thread.CurrentThread.CurrentCulture value (StringComparison.CurrentCulture and StringComparison.CurrentCultureIgnoreCase)
        Invariant culture comparison (StringComparison.InvariantCulture and StringComparison.InvariantCultureIgnoreCase)
        Ordinal (byte-by-byte) comparison (StringComparison.Ordinal and StringComparison.OrdinalIgnoreCase)

    There is a lot of great material available detailing the technical ins and outs of these different string comparison approaches. If you’re at all interested in the topic, these two MSDN articles are worth a read:

        Best Practices For Using Strings in the .NET Framework: http://msdn.microsoft.com/en-us/library/dd465121.aspx
        How To Compare Strings: http://msdn.microsoft.com/en-us/library/cc165449.aspx

    Those articles cover the technical details of string comparison well enough that I’m not going to reiterate them here, other than to say that the upshot is that you typically want to use the culture-sensitive comparison whenever you’re comparing strings that were entered by or will be displayed to users, and the ordinal comparison in nearly all other cases. So where does that leave the invariant culture comparisons? The “Best Practices For Using Strings in the .NET Framework” article has the following to say:

        “On balance, the invariant culture has very few properties that make it useful for comparison. It does comparison in a linguistically relevant manner, which prevents it from guaranteeing full symbolic equivalence, but it is not the choice for display in any culture. One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display. For example, if a large data file that contains a list of sorted identifiers for display accompanies an application, adding to this list would require an insertion with invariant-style sorting.”

    I don’t know about you, but I feel like that paragraph is a bit lacking. Are there really any “real world” reasons to use the invariant culture comparison? I think the answer to this question is “yes”, but in order to understand why, we should first think about what the invariant culture comparison really does. The invariant culture comparison is really just a culture-sensitive comparison using a special invariant culture (Michael Kaplan has a great post on the history of the invariant culture on his blog: http://blogs.msdn.com/b/michkap/archive/2004/12/29/344136.aspx). This means that the invariant culture comparison will apply the linguistic customs defined by the invariant culture, which are guaranteed not to differ between different machines or execution contexts. This sort of consistency does prove useful if you need to maintain a list of strings that are sorted in a meaningful and consistent way regardless of the user viewing them or the machine on which they are being viewed.

    Example: Prototype Names

    Let’s say that you work for a large multi-national toy company with branch offices in 10 different countries.
    Each year the company works on 15-25 new toy prototypes, each of which is assigned a “code name” while it is under development. Coming up with fun new code names is a big part of the company culture that everyone really enjoys, so to be fair, the CEO of the company spent a lot of time coming up with a prototype naming scheme that would be fun for everyone to participate in, fair to all of the different branch locations, and accessible to all members of the organization regardless of the country they were from and the language that they spoke:

        Each new prototype gets a code name that begins with the letter following the previously created name, using the alphabetical order of the Latin/Roman alphabet. Each new year, prototype names start back at “A”.
        The country that leads the prototype development effort gets to choose the name in their native language. (An appropriate romanization system will be used for countries where the primary language is not written in the Latin/Roman alphabet; for example, the Pinyin system could be used for Chinese.)
        To avoid repeating names, a list of all current and past prototype names will be maintained on each branch location’s company intranet site.

    Assuming that maintaining a single pre-sorted list is not feasible among all of the highly distributed intranet implementations, what string comparison method would you use to sort each year’s list of prototype names so that the list is both meaningful and consistent regardless of the country within which the list is being viewed?

    Sorting the list with a culture-sensitive comparison using the default configured culture on each country’s intranet server would probably work most of the time, but subtle differences between cultures could mean that two different people would see a list that was sorted slightly differently. The CEO wants the prototype names to be a unifying aspect of company culture and is adamant that everyone see the same list sorted in the same order, and there’s no way to guarantee a consistent sort across different cultures using the culture-sensitive string comparison rules. The culture-sensitive sort would produce a meaningful list for the specific user viewing it, but it wouldn’t always be consistent between different users.

    Sorting with the ordinal comparison would certainly be consistent regardless of the user viewing it, but would it be meaningful? Let’s say that the current year’s prototype name list looks like this:

        Antílope (Spanish)
        Babouin (French)
        Čahoun (Czech)
        Diamond (English)
        Flosse (German)

    If you were to sort this list using ordinal rules you’d end up with:

        Antílope
        Babouin
        Diamond
        Flosse
        Čahoun

    This sort is no good because the entry for “C” appears at the bottom of the list, after “F”. This is because the Czech entry for the letter “C” makes use of a diacritic (accent mark). The ordinal string comparison does a byte-by-byte comparison of the code points that make up each character in the string, and the code point for the “C” with the diacritic mark is higher than any letter without a diacritic mark, which pushes that entry to the bottom of the sorted list. The CEO wants each country to be able to create prototype names in their native language, which means we need to allow for names that might begin with letters that have diacritics, so ordinal sorting kills the meaningfulness of the list. As it turns out, this situation is actually well-suited to the invariant culture comparison.
    The invariant culture accounts for linguistically relevant factors like the use of diacritics, but will provide a consistent sort across all machines that perform the sort. Now that we’ve walked through this example, the following line from “Best Practices For Using Strings in the .NET Framework” makes a lot more sense:

        One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display

    That line describes the prototype name example perfectly: we need a way to persist ordered data for a cross-culturally identical display. While this example is 100% made up, I think it illustrates that there are indeed real-world situations where the invariant culture comparison is useful.
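
    To make the example concrete, here is a small program demonstrating the two sorts (a sketch of the article's scenario, using the name list above):

        using System;

        class SortDemo
        {
            static void Main()
            {
                var names = new[] { "Flosse", "Diamond", "Čahoun", "Babouin", "Antílope" };

                // Ordinal: raw code-point order pushes "Čahoun" past "Flosse".
                Array.Sort(names, StringComparer.Ordinal);
                Console.WriteLine(string.Join(", ", names));
                // Antílope, Babouin, Diamond, Flosse, Čahoun

                // Invariant culture: linguistically aware, and identical on every machine.
                Array.Sort(names, StringComparer.InvariantCulture);
                Console.WriteLine(string.Join(", ", names));
                // Antílope, Babouin, Čahoun, Diamond, Flosse
            }
        }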

    Read the article

  • Restricting logons during certain hours for certain users

    - by simonsabin
    Following an email in a DL, I decided to look at implementing a logon restriction system to prevent users from logging on at certain times of the day. The poster had a solution but wanted to add auditing. I immediately thought of my post on logging messages during a transaction, because I knew that part of the logon trigger functionality is that you roll back the connection. I therefore assumed you had to do the logging like I describe in that post (otherwise the logging wouldn’t persist beyond...(read more)

    Read the article

  • Is RTD Stateless or Stateful?

    - by [email protected]
    Yes.

    A stateless service is one where each request is an independent transaction that can be processed by any of the servers in a cluster. A stateful service is one where state is kept in a server's memory from transaction to transaction, thus necessitating the proper routing of requests to the right server. The main advantage of stateless systems is simplicity of design; the main advantage of stateful systems is performance.

    I'm often asked whether RTD is a stateless or stateful service, so I wanted to clarify this issue in depth so that RTD's architecture will be properly understood. The short answer is: "RTD can be configured as a stateless or stateful service."

    The performance difference between stateless and stateful systems can be very significant. While in a call center implementation it may be reasonable to use a pure stateless configuration, a web implementation that produces thousands of requests per second is practically impossible with a stateless configuration. RTD's performance is orders of magnitude better than most competing systems; RTD was architected from the ground up to achieve this performance. Features like automatic and dynamic compression of prediction models, automatic translation of metadata to machine code, lack of interpreted languages, and separation of model building from decisioning contribute to achieving this performance level. Because of this focus on performance, we decided to have RTD's default configuration work in a stateful manner. By being stateful, RTD requests are typically handled in a few milliseconds when repeated requests come to the same session.

    Now, those readers that have participated in implementations of RTD know that RTD's architecture is also focused on reducing Total Cost of Ownership (TCO), with features like automatic model building, automatic time windows, automatic maintenance of database tables, automatic evaluation of data mining models, automatic management of models partitioned by channel, geography, etcetera, and hot swapping of configurations. How do you reconcile the need for a low TCO with the need for performance? How do you get the performance of a stateful system with the simplicity of a stateless system?

    The answer is that you make the system behave like a stateless system to the exterior, but you let it automatically take advantage of situations where being stateful is better. For example, one of the advantages of stateless systems is that you can route a message to any server in a cluster, without worrying about sending it to the same server that was handling the session in previous messages. With an RTD stateful configuration you can still route the message to any server in the cluster, so from the point of view of the configuration of other systems, it is the same as a stateless service. The difference comes in performance: if the message arrives at the right server, RTD can serve it without any external access to the session's state, tremendously reducing processing time. In typical implementations it is not rare to have high percentages of messages routed directly to the right server, while those that are not are easily handled by forwarding the messages to the right server. This architecture usually provides the best of both worlds: performance and simplicity of configuration.
    Configuring RTD as a pure stateless service

    A pure stateless configuration requires session data to be persisted at the end of handling each and every message and reloaded at the beginning of handling any new message. This is, of course, the root of the inefficiency of these configurations. It is also the reason why many "stateless" implementations actually do keep state, to take advantage of a request coming back to the same server. Nevertheless, if the implementation requires a pure stateless decision service, this is easy to configure in RTD. The way to do it is: mark every Integration Point to close the session at the end of processing the message; in the Session entity, persist the session data on closing the session; and in the Session entity, check whether a persisted version exists and load it. An excellent solution for persisting the session data is Oracle Coherence, which provides a high-performance, distributed cache that minimizes the performance impact of persisting and reloading the session. Alternatively, the session can be persisted to a local database.

    An interesting feature of the RTD stateless configuration is that it can cope with serializing concurrent requests for the same session. For example, if a web page produces two requests to the decision service, these requests could arrive at the decision service concurrently and be handled by different servers. Most stateless implementations would have the two requests step on each other when saving the state, or fail one of the messages. When properly configured, RTD will make one message wait for the other before processing.

    A Word on Context

    Using the context of a customer interaction typically increases lift significantly. For example, offer success in a call center could double if the context of the call is taken into account. For this reason, it is important to utilize the contextual information in decision making, and to make the contextual information available throughout a session it needs to be persisted. When there is a well-defined owner for the information there is no problem, because in case of a session restart the information can be easily retrieved. If there is no official owner of the information, then RTD can be configured to persist it.

    Once again, RTD provides flexibility to ensure high performance when it is adequate to allow for some loss of state in the rare cases of server failure. For example, in a heavily used web site that serves 1000 pages per second, the navigation history may be stored in the in-memory session. In such sites it is typical that there is no OLTP system that stores all the navigation events, so if an RTD server were to fail, the navigation up to that point could be lost (note that a new session would be immediately established on one of the other servers). In most cases the loss of this navigation information would be acceptable, as it would happen rarely. If it is desired to save this information, RTD can persist it every time the visitor navigates to a new page. Note that this practice is preferred whether RTD is configured in a stateless or stateful manner.

    Read the article

  • Encrypted Hidden Redux: Let's Get Salty

    - by HeartattacK
    In this article, Ashic Mahtab shows an elegant, reusable and unobtrusive way to persist sensitive data to the browser in hidden inputs and restore it on postback, without needing to change any code in controllers or actions. The approach is an improvement on his previous article and incorporates a per-session salt during encryption. Note: cross-posted from Heartysoft.com. Permalink

    Read the article

  • Bazaar issues while installing Ubuntu TV

    - by Aleksi Kinnunen
    I tried to install Ubuntu TV on Ubuntu 12.04 with this guide: https://wiki.ubuntu.com/UbuntuTV/Contributing. Everything was OK until I ran bzr branch lp:~s-team/ubuntutv/trunk ubuntu-tv in the terminal. It says:

        Permission denied (publickey).
        ConnectionReset reading response for 'BzrDir.open_2.1', retrying
        Permission denied (publickey).
        bzr: ERROR: Connection closed: Unexpected end of message. Please check connectivity and permissions, and report a bug if problems persist.
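
    Judging by the "Permission denied (publickey)" lines, bzr is likely contacting Launchpad over SSH without a key the site knows about. The usual checks are below (standard bzr and OpenSSH commands; the account name is a placeholder):

        # Create an SSH key if you don't have one, then upload ~/.ssh/id_rsa.pub
        # to your Launchpad account
        ssh-keygen -t rsa
        # Tell bzr which Launchpad account to authenticate as
        bzr launchpad-login your-launchpad-username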

    Read the article

  • Settings are not being saved

    - by Gavin Panella
    Several weeks ago I updated my Facebook and Google accounts in Empathy to use per-application passwords. However, it forgets these passwords on each reboot and I have to enter them again. Settings in Edit with Emacs in Chromium do not persist either, so I think this is more than an Empathy bug. I tried mv ~/.gconf ~/.gconf.bak, but that didn't help. What other common denominators are there that might be at the root of this?

    Read the article

  • Community Technology Update (CTU) 2011

    - by Aman Garg
    I spoke at the session on Web Forms at CTU 2011 (Community Technology Update) in Singapore and had a good interaction with the developer community here. I covered the following topics during the session:

        Dynamic Data
        Routing
        Web Forms additions:
            Predictable client IDs
            Programmable metadata
            Better control over ViewState
            Persist selected rows
        Web Deployment

    The slide deck can be accessed using the following URL: http://www.slideshare.net/amangarg516/web-forms-im-still-alive

    Read the article

  • Sound works but cannot change settings or volume

    - by W. Conrad Walden
    Volume is always at max. The volume indicator shows three dashes and nothing appears when I click on it. Clicking the "Sound" button in Settings causes it to recursively reopen Settings. pulseaudio --start doesn't help. EDIT: This only started happening after the upgrade to 13.10. EDIT 2: This is in Unity, yes. EDIT 3: killall unity-panel-service works and does reload the panels, but the previous problems persist.

    Read the article

  • Notification / tray icon / applet drop downs disappear or flicker when clicked

    - by postfuturist
    For some reason, after upgrading to 11.10, the tray-icon drop-down menus don't persist after a single click about 2/3 of the time. They always work if I click and hold, but I'm used to just clicking once to examine the menu. The behavior is not consistent: the drop-down menus stay after a click about 1 time in 3. I'm running 64-bit Ubuntu on a dual-monitor setup. EDIT: from lspci:

        01:00.0 VGA compatible controller: nVidia Corporation M116N (rev a2)

    Read the article

  • Syncing client and server CRUD operations using json and php

    - by Justin
    I'm working on some code to sync the state of models between client (a JavaScript application) and server. Often I end up writing redundant code to track the client and server objects so I can map the client-supplied data to the server models. Below is some code I am thinking about implementing to help. What I don't like about it is that it won't handle nested relationships very well; I would have to create multiple object trackers. One workaround is, for each server model after creating or loading, to simply do $model->clientId = $clientId; IMO this is a nasty hack and I want to avoid it. Adding a setClientId method to all my model objects would be another way to make it less hacky, but this seems like overkill to me. Really, clientIds are only good for inserting/updating data in some scenarios. I could go with a decorator pattern, but auto-generating a proxy class seems a bit involved. I could use a generic proxy class that uses a __call function to allow the original object data to be accessed, but this seems wrong too. Any thoughts or comments?

        $clientData = '[{name: "Bob", action: "update", id: 1, clientId: 200},
                        {name: "Susan", action: "create", clientId: 131}]';
        $jsonObjs = json_decode($clientData);

        $objectTracker = new ObjectTracker();
        $objectTracker->trackClientObjs($jsonObjs);

        $query = $this->em->createQuery("SELECT x FROM Application_Model_User x WHERE x.id IN (:ids)");
        $query->setParameters("ids", $objectTracker->getClientSpecifiedServerIds());
        $models = $query->getResults();

        // Apply client data to server model
        foreach ($models as $model) {
            $clientModel = $objectTracker->getClientJsonObj($model->getId());
            ...
        }

        // Create new models and persist
        foreach ($objectTracker->getNewClientObjs() as $newClientObj) {
            $model = new Application_Model_User();
            ....
            $em->persist($model);
            $objectTracker->trackServerObj($model);
        }

        $em->flush();

        $resourceResponse = $objectTracker->createResourceResponse();
        // Id mappings will be an associative array representing server id
        // resources with client-side ids.
        // This method doesn't seem too flexible if we want to return additional
        // data with each resource... we would have to modify the returned data
        // structure. Seems like tight coupling...
        // Example return value:
        // [{clientId: 200, id: 1}, {clientId: 131, id: 33}];

    Read the article

  • Are separate business objects needed when persistent data can be stored in a usable format?

    - by Kylotan
    I have a system where data is stored in a persistent store and read by a server application. Some of this data is only ever seen by the server, but some of it is passed through unaltered to clients. So, there is a big temptation to persist data - whether whole rows/documents or individual fields/sub-documents - in the exact form that the client can use (e.g. JSON), as this removes various layers of boilerplate, whether in the form of procedural SQL, an ORM, or any proxy structure which exists just to hold the values before having to re-encode them into a client-suitable form. This form can usually be used on the server too, though business logic may have to live outside of the object. On the other hand, this approach ends up leaking implementation details everywhere. 9 times out of 10 I'm happy just to read a JSON structure out of the DB and send it to the client, but 1 in every 10 times I have to know the details of that implicit structure (and be able to refactor access to it if the stored data ever changes). And this makes me think that maybe I should be pulling this data into separate business objects, so that business logic doesn't have to change when the data schema does. (Though you could argue this just moves the problem rather than solving it.) There is a complicating factor in that our data schema is constantly and rapidly changing, to the point where we dropped our previous ORM/RDBMS system in favour of MongoDB and an implicit schema which was much easier to work with. So far I've not decided whether the rapid schema changes make me wish for separate business objects (so that server-side calculations need less refactoring, since all changes are restricted to the persistence layer) or for no separate business objects (because every change to the schema requires the business objects to change to stay in sync, even if the new sub-object or field is never used on the server except to pass verbatim to a client). So my question is whether it is sensible to store objects in the form they are usually going to be used in, or if it's better to copy them into intermediate business objects to insulate both sides from each other (even when that isn't strictly necessary)? And I'd like to hear from anybody else who has had experience of a similar situation, perhaps choosing to persist XML or JSON instead of having an explicit schema which has to be assembled into a client format each time.

    Read the article

  • A Simple Entity Tagger

    - by Elton Stoneman
    In the REST world, ETags are your gateway to performance boosts by letting clients cache responses. In the non-REST world, you may also want to add an ETag to an entity definition inside a traditional service contract - think of a scenario where a consumer persists its own representation of your entity and wants to keep it in sync. Rather than load every entity by ID and check for changes, the consumer can send in a set of linked IDs and ETags, and you can return only the entities where the current ETag is different from the consumer's version. If your entity is a projection from various sources, you may not have a persistent ETag, so you need an efficient way to generate an ETag which is deterministic, so that an entity with the same state always generates the same ETag. I have an implementation of a generic ETag generator on GitHub here: EntityTagger code sample. The essence is simple: we get the entity, serialize it and build a hash from the serialized value. Any change to either the state or the structure of the entity will result in a different hash. To use it, just call SetETag, passing your populated object and a Func<> which acts as an accessor to the ETag property:

        EntityTagger.SetETag(user, x => x.ETag);

    The whole implementation is about 80 lines of code, all pretty straightforward:

        var eTagProperty = AsPropertyInfo(eTagPropertyAccessor);
        var originalETag = eTagProperty.GetValue(entity, null);
        try
        {
            ResetETag(entity, eTagPropertyAccessor);
            string json;
            var serializer = new DataContractJsonSerializer(entity.GetType());
            using (var stream = new MemoryStream())
            {
                serializer.WriteObject(stream, entity);
                json = Encoding.UTF8.GetString(stream.GetBuffer(), 0, (int)stream.Length);
            }
            var guid = GetDeterministicGuid(json);
            eTagProperty.SetValue(entity, guid.ToString(), null);
            //...

    There are a couple of helper methods to check if the object has changed since the ETag value was last set, and to reset the ETag. This implementation uses JSON to do the serializing rather than XML. Benefit: it should be marginally more efficient, as you're hashing a much smaller serialized string. Downside: JSON doesn't include namespaces or class names at the root level, so if you have two classes with exactly the same structure but different names, instances which have the same content will have the same ETag. You may want that behaviour, but change to the XML DataContractSerializer if you think that will be an issue. If you can persist the ETag somewhere, it will save you server processing to load up the entity, but that will only apply to scenarios where you can reliably invalidate your ETag (e.g. if you control all the entry points where entity contents can be updated, then you can calculate and persist the new ETag with each update).
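
    The sample relies on a GetDeterministicGuid helper; the full version is in the linked GitHub sample. A minimal sketch of one common way to implement it (an assumption about the approach, not necessarily the sample's exact code) hashes the JSON with MD5, whose 16-byte digest maps directly onto a Guid:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class DeterministicGuid
        {
            // Same input string in, same Guid out: MD5 produces exactly
            // the 16 bytes a Guid needs.
            public static Guid FromString(string input)
            {
                using (var md5 = MD5.Create())
                {
                    byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(input));
                    return new Guid(hash);
                }
            }
        }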

    Read the article

  • Vmware Server Console stops responding to clicks

    - by ProfeDiego
    I had installed vmware-server-console just fine, but when you open it, as soon as you click in a virtual machine, that's it: the app stops working. It doesn't freeze; it just stops recognizing where you click. Everywhere you click, it puts the cursor inside the VM. You can't switch VMs, can't right-click on the left menu; there isn't even a toolbar. I tried logging in with a classic Ubuntu session, but this behavior persists. Is there a way to fix this?

    Read the article

  • Kernel panic - not syncing: no init found. Try passing init=option to kernel

    - by deepak
    I formatted all the partitions on my computer into a single ext4 partition and did a fresh installation of Ubuntu 13.04. I'm getting:

        Failed to execute /init
        Kernel panic - not syncing: No init found. Try passing init= option to kernel.

    Even on a clean reinstall the issue persists. Booting from recovery mode leads to the same error, so I'm not able to reach a terminal, but I am able to boot from the Live CD. Any help much appreciated.
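
    As a first diagnostic (taking the panic message at its word; whether this isolates the fault here is an assumption), you can pass an explicit init= from the boot menu and see whether you get a shell at all:

        # At the GRUB menu: highlight the Ubuntu entry, press 'e', append this
        # to the line beginning with 'linux', then boot with Ctrl-X:
        init=/bin/bash
        # A root shell means the kernel and filesystem are readable, pointing
        # at the initramfs/init system rather than the disk itself.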

    Read the article

  • OpenVPN via DD-WRT

    - by user140491
    I am using DD-WRT with my Buffalo G300NH. I notice this in my log files:

        Wed Oct 10 01:08:25 2012 us=343000 Cannot open /tmp/openvpn/dh.pem for DH parameters: error:02001003:system library:fopen:No such process: error:2006D080:BIO routines:BIO_new_file:no such file

    I have looked at other answers regarding this error and tried them, to no avail. The rights on /tmp/openvpn are chmod 755. At this point, I cannot connect from outside my LAN via OpenVPN. My server config looks like this:

        #mode server
        #tls-server
        push "route 192.168.11.1 255.255.255.0"
        push "dhcp-option DNS 10.8.0.1"
        server 10.8.0.0 255.255.255.0
        port 1194
        proto udp
        dev tun0
        ifconfig 10.8.0.1 10.8.0.2
        #secret /tmp/static.key
        ca /tmp/openvpn/ca.crt
        cert /tmp/openvpn/cert.pem
        key /tmp/openvpn/key.pem
        dh /tmp/openvpn/dh.pem
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        verb 5
        management localhost 5001

    Can someone knowledgeable about this error kindly help? I have been at it for several days trying to sort it out. I like all-nighters though!!
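
    The log line says OpenVPN cannot find /tmp/openvpn/dh.pem, which usually means the Diffie-Hellman parameter file was never generated or, since /tmp on DD-WRT is volatile, did not survive a reboot. Assuming the file is simply missing (rather than unreadable), the standard openssl tool creates it:

        # Generate 1024-bit DH parameters at the path the config expects
        # (use 2048 if your certificates use 2048-bit keys; this takes a while)
        openssl dhparam -out /tmp/openvpn/dh.pem 1024

    Because /tmp is cleared on reboot, the file has to be regenerated or copied back by a startup script, or kept somewhere non-volatile such as jffs.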

    Read the article

  • Communicating via Command Mode with IBM HS22 IMM via AMM

    - by MikeyB
    On previous model blades that contained a BMC, I was able to communicate from our external management station via pass-through commands to the BMC to do things such as power blades on/off, set VPD parameters, and reboot the BMC. Now on the HS22, a bunch of things happen differently. For example, we can no longer use the same pass-through commands to write VPD information pages and have them persist across reboots of the IMM; it looks as though those VPD pages are populated from information contained in the IMM. How do we use the Advanced Settings Utility from an external host to communicate with HS22 IMMs? Alternatively, what TCP Command Mode commands do we need to send to the AMM to communicate with the IMM? For our purposes, we specifically cannot communicate with the IMM from the blade itself. Specific example: when I send a pass-thru IPMI command via the AMM to the blade BMC to write information (such as MTM and serial) into VPD page 0x10, it persists on blades with a BMC (HS21 for example). I can send the same IPMI command to write data to the VPD page on the HS22, but it does not persist across reboots of the IMM. What IPMI commands do I need to send to the IMM? What IPMI commands is asu sending when it sets the MTM & serial?

    Read the article

  • Single application through OpenVPN tunnel (Debian Lenny)

    - by mikael
    I'm using Debian Lenny and I want to tunnel only rtorrent through an OpenVPN tunnel. I have a tunnel running; the config file looks like this:

        client
        dev tun
        proto udp
        remote openvpn.xxx.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca /etc/openvpn/xxx/keys/ca.crt
        cert /etc/openvpn/xxx/keys/client.crt
        key /etc/openvpn/xxx/keys/client.key
        tls-auth /etc/openvpn/xxx/keys/tls.key 1
        ns-cert-type server
        comp-lzo
        verb 3
        auth-user-pass
        script-security 3
        reneg-sec 0

    My idea is that I could run a sockd proxy internally that redirects traffic to the OpenVPN tunnel, and use the *nix "proxifier" application tsocks to make it possible for rtorrent to connect through that proxy (as rtorrent doesn't support proxies). I have trouble configuring sockd because my IP inside the VPN changes every time I connect. This is a config file someone said would help: http://ircpimps.org/sockd.conf. As my IP changes at each connect, I don't know what to put in that config file. I have no control over the host-side config file. Any help wanted; any other method is very welcome.
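
    One hedged way around the changing-IP problem: OpenVPN exports the tunnel's local address to --up scripts as the ifconfig_local environment variable, so a small script can rewrite sockd's "internal" line and restart the proxy on every connect. A sketch (the sockd.conf layout and init script path are assumptions about the local setup; script-security 3 in the config above already permits up scripts):

        #!/bin/sh
        # /etc/openvpn/update-sockd.sh -- hook it into the client config with:
        #   up /etc/openvpn/update-sockd.sh
        # OpenVPN sets $ifconfig_local to the tunnel's local IP before calling us.
        sed -i "s/^internal:.*/internal: $ifconfig_local port = 1080/" /etc/sockd.conf
        /etc/init.d/sockd restart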

    Read the article

  • OpenVPN Client timing out

    - by Austin
    I recently installed OpenVPN on my Ubuntu VPS. Whenever I try to connect to it, I can establish a connection just fine. However, everything I try to connect to times out. If I try to ping something, it will resolve the IP, but will time out after resolving the IP (so the DNS server seems to be working correctly). My server.conf has this relevant information (at least I think it's relevant; I'm not sure if you need more or not):

        # Which local IP address should OpenVPN
        # listen on? (optional)
        ;local a.b.c.d

        # Which TCP/UDP port should OpenVPN listen on?
        # If you want to run multiple OpenVPN instances
        # on the same machine, use a different port
        # number for each one. You will need to
        # open up this port on your firewall.
        port 1194

        # TCP or UDP server?
        ;proto tcp
        proto udp

        # "dev tun" will create a routed IP tunnel,
        # "dev tap" will create an ethernet tunnel.
        # Use "dev tap0" if you are ethernet bridging
        # and have precreated a tap0 virtual interface
        # and bridged it with your ethernet interface.
        # If you want to control access policies
        # over the VPN, you must create firewall
        # rules for the TUN/TAP interface.
        # On non-Windows systems, you can give
        # an explicit unit number, such as tun0.
        # On Windows, use "dev-node" for this.
        # On most systems, the VPN will not function
        # unless you partially or fully disable
        # the firewall for the TUN/TAP interface.
        ;dev tap
        dev tun

        # Windows needs the TAP-Win32 adapter name
        # from the Network Connections panel if you
        # have more than one. On XP SP2 or higher,
        # you may need to selectively disable the
        # Windows firewall for the TAP adapter.
        # Non-Windows systems usually don't need this.
        ;dev-node MyTap

        # SSL/TLS root certificate (ca), certificate
        # (cert), and private key (key). Each client
        # and the server must have their own cert and
        # key file. The server and all clients will
        # use the same ca file.
        #
        # See the "easy-rsa" directory for a series
        # of scripts for generating RSA certificates
        # and private keys. Remember to use
        # a unique Common Name for the server
        # and each of the client certificates.
        #
        # Any X509 key management system can be used.
        # OpenVPN can also use a PKCS #12 formatted key file
        # (see "pkcs12" directive in man page).
        ca ca.crt
        cert server.crt
        key server.key  # This file should be kept secret

        # Diffie hellman parameters.
        # Generate your own with:
        #   openssl dhparam -out dh1024.pem 1024
        # Substitute 2048 for 1024 if you are using
        # 2048 bit keys.
        dh dh1024.pem

        # Configure server mode and supply a VPN subnet
        # for OpenVPN to draw client addresses from.
        # The server will take 10.8.0.1 for itself,
        # the rest will be made available to clients.
        # Each client will be able to reach the server
        # on 10.8.0.1. Comment this line out if you are
        # ethernet bridging. See the man page for more info.
        server 10.8.0.0 255.255.255.0

        # Maintain a record of client <-> virtual IP address
        # associations in this file. If OpenVPN goes down or
        # is restarted, reconnecting clients can be assigned
        # the same virtual IP address from the pool that was
        # previously assigned.
        ifconfig-pool-persist ipp.txt

        # Configure server mode for ethernet bridging.
        # You must first use your OS's bridging capability
        # to bridge the TAP interface with the ethernet
        # NIC interface. Then you must manually set the
        # IP/netmask on the bridge interface, here we
        # assume 10.8.0.4/255.255.255.0. Finally we
        # must set aside an IP range in this subnet
        # (start=10.8.0.50 end=10.8.0.100) to allocate
        # to connecting clients. Leave this line commented
        # out unless you are ethernet bridging.
        ;server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100

        # Configure server mode for ethernet bridging
        # using a DHCP-proxy, where clients talk
        # to the OpenVPN server-side DHCP server
        # to receive their IP address allocation
        # and DNS server addresses. You must first use
        # your OS's bridging capability to bridge the TAP
        # interface with the ethernet NIC interface.
        # Note: this mode only works on clients (such as
        # Windows), where the client-side TAP adapter is
        # bound to a DHCP client.
        ;server-bridge

        # Push routes to the client to allow it
        # to reach other private subnets behind
        # the server. Remember that these
        # private subnets will also need
        # to know to route the OpenVPN client
        # address pool (10.8.0.0/255.255.255.0)
        # back to the OpenVPN server.
        ;push "route 192.168.10.0 255.255.255.0"
        ;push "route 192.168.20.0 255.255.255.0"

        # To assign specific IP addresses to specific
        # clients or if a connecting client has a private
        # subnet behind it that should also have VPN access,
        # use the subdirectory "ccd" for client-specific
        # configuration files (see man page for more info).

        # EXAMPLE: Suppose the client
        # having the certificate common name "Thelonious"
        # also has a small subnet behind his connecting
        # machine, such as 192.168.40.128/255.255.255.248.
        # First, uncomment out these lines:
        ;client-config-dir ccd
        ;route 192.168.40.128 255.255.255.248
        # Then create a file ccd/Thelonious with this line:
        #   iroute 192.168.40.128 255.255.255.248
        # This will allow Thelonious' private subnet to
        # access the VPN. This example will only work
        # if you are routing, not bridging, i.e. you are
        # using "dev tun" and "server" directives.

        # EXAMPLE: Suppose you want to give
        # Thelonious a fixed VPN IP address of 10.9.0.1.
        # First uncomment out these lines:
        ;client-config-dir ccd
        ;route 10.9.0.0 255.255.255.252
        # Then add this line to ccd/Thelonious:
        #   ifconfig-push 10.9.0.1 10.9.0.2

        # Suppose that you want to enable different
        # firewall access policies for different groups
        # of clients. There are two methods:
        # (1) Run multiple OpenVPN daemons, one for each
        #     group, and firewall the TUN/TAP interface
        #     for each group/daemon appropriately.
        # (2) (Advanced) Create a script to dynamically
        #     modify the firewall in response to access
        #     from different clients. See man
        #     page for more info on learn-address script.
        ;learn-address ./script

        # If enabled, this directive will configure
        # all clients to redirect their default
        # network gateway through the VPN, causing
        # all IP traffic such as web browsing and
        # and DNS lookups to go through the VPN
        # (The OpenVPN server machine may need to NAT
        # or bridge the TUN/TAP interface to the internet
        # in order for this to work properly).
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 8.8.8.8"

        # Certain Windows-specific network settings
        # can be pushed to clients, such as DNS
        # or WINS server addresses. CAVEAT:
        # http://openvpn.net/faq.html#dhcpcaveats
        # The addresses below refer to the public
        # DNS servers provided by opendns.com.
        ;push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"

        # Uncomment this directive to allow different
        # clients to be able to "see" each other.
        # By default, clients will only see the server.
        # To force clients to only see the server, you
        # will also need to appropriately firewall the
        # server's TUN/TAP interface.
        ;client-to-client

        # Uncomment this directive if multiple clients
        # might connect with the same certificate/key
        # files or common names. This is recommended
        # only for testing purposes. For production use,
        # each client should have its own certificate/key
        # pair.
        #
        # IF YOU HAVE NOT GENERATED INDIVIDUAL
        # CERTIFICATE/KEY PAIRS FOR EACH CLIENT,
        # EACH HAVING ITS OWN UNIQUE "COMMON NAME",
        # UNCOMMENT THIS LINE OUT.
        ;duplicate-cn

        # The keepalive directive causes ping-like
        # messages to be sent back and forth over
        # the link so that each side knows when
        # the other side has gone down.
        # Ping every 10 seconds, assume that remote
        # peer is down if no ping received during
        # a 120 second time period.
        keepalive 10 120

        # For extra security beyond that provided
        # by SSL/TLS, create an "HMAC firewall"
        # to help block DoS attacks and UDP port flooding.
        #
        # Generate with:
        #   openvpn --genkey --secret ta.key
        #
        # The server and each client must have
        # a copy of this key.
        # The second parameter should be '0'
        # on the server and '1' on the clients.
        ;tls-auth ta.key 0  # This file is secret

        # Select a cryptographic cipher.
        # This config item must be copied to
        # the client config file as well.
        ;cipher BF-CBC        # Blowfish (default)
        ;cipher AES-128-CBC   # AES
        ;cipher DES-EDE3-CBC  # Triple-DES

        # Enable compression on the VPN link.
        # If you enable it here, you must also
        # enable it in the client config file.
        comp-lzo

        # The maximum number of concurrently connected
        # clients we want to allow.
        ;max-clients 100

        # It's a good idea to reduce the OpenVPN
        # daemon's privileges after initialization.
        #
        # You can uncomment this out on
        # non-Windows systems.
        ;user nobody
        ;group nogroup

        # The persist options will try to avoid
        # accessing certain resources on restart
        # that may no longer be accessible because
        # of the privilege downgrade.
        persist-key
        persist-tun

        # Output a short status file showing
        # current connections, truncated
        # and rewritten every minute.
        status openvpn-status.log

        # By default, log messages will go to the syslog (or
        # on Windows, if running as a service, they will go to
        # the "\Program Files\OpenVPN\log" directory).
        # Use log or log-append to override this default.
        # "log" will truncate the log file on OpenVPN startup,
        # while "log-append" will append to it. Use one
        # or the other (but not both).
        ;log         openvpn.log
        ;log-append  openvpn.log

        # Set the appropriate level of log
        # file verbosity.
        #
        # 0 is silent, except for fatal errors
        # 4 is reasonable for general usage
        # 5 and 6 can help to debug connection problems
        # 9 is extremely verbose
        verb 3

        # Silence repeating messages. At most 20
        # sequential messages of the same message
        # category will be output to the log.
        ;mute 20

    I've tried on multiple computers, by the way; the same result on all of them. What could be wrong? Thanks in advance, and if you need other information I'll gladly post it.
    Information for new comments:

        root@vps:~# iptables -L -n -v
        Chain INPUT (policy ACCEPT 862K packets, 51M bytes)
         pkts bytes target prot opt in  out  source        destination

        Chain FORWARD (policy ACCEPT 3 packets, 382 bytes)
         pkts bytes target prot opt in  out  source        destination
            0     0 ACCEPT  all  --  *   *   0.0.0.0/0     0.0.0.0/0    state RELATED,ESTABLISHED
         4641  298K ACCEPT  all  --  *   *   10.8.0.0/24   0.0.0.0/0
            0     0 REJECT  all  --  *   *   0.0.0.0/0     0.0.0.0/0    reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 1671K packets, 2378M bytes)
         pkts bytes target prot opt in  out  source        destination

    And:

        root@vps:~# iptables -t nat -L -n -v
        Chain PREROUTING (policy ACCEPT 17937 packets, 2013K bytes)
         pkts bytes target prot opt in  out  source        destination

        Chain POSTROUTING (policy ACCEPT 8975 packets, 562K bytes)
         pkts bytes target prot opt in  out  source        destination
         1579  103K SNAT    all  --  *   *   10.8.0.0/24   0.0.0.0/0    to:SERVERIP

        Chain OUTPUT (policy ACCEPT 8972 packets, 562K bytes)
         pkts bytes target prot opt in  out  source        destination
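
    One thing the posted output does not show is whether kernel forwarding is enabled; with redirect-gateway pushed and SNAT in place, a disabled net.ipv4.ip_forward would produce exactly this "connects but everything times out" symptom. A hedged diagnostic (standard sysctl commands; that this is the actual cause here is an assumption):

        # Check whether forwarding is on (1 = enabled)
        sysctl net.ipv4.ip_forward
        # Enable it for the running kernel...
        sysctl -w net.ipv4.ip_forward=1
        # ...and make it survive reboots
        echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf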

    Read the article

  • maximum number of connections Squid

    - by Isaac
    I have a Squid proxy server that controls all internet traffic for my network. I need a way to stop users from downloading big files (say 50 MB) on my network. I banned some well-known ports (e.g. torrent), but some downloads are still possible over the HTTP port, and obviously I cannot ban port 80! A simple solution is limiting the maximum number of simultaneous connections for each IP (e.g. 3 connections). That's possible in Squid with this config:

        acl ACCOUNTSDEPT 192.168.5.0/24
        acl limitusercon maxconn 3
        http_access deny ACCOUNTSDEPT limitusercon

    But this solution has a really bad impact on web browsing, because any smart browser fetches different parts of a website over several simultaneous connections to speed up browsing. With a cap on simultaneous connections, the browser will fail to get some parts and the website will be shown partially; some parts/images/frames will not be shown. So, can we limit the maximum number of persistent connections instead? I think this policy would work: terminate any connection that stays alive for more than a specified time (say 10 seconds), but leave the number of simultaneous connections for each IP unlimited. But how can we implement this policy in Squid? With which config?

    UPDATE: artifex and Tom Newton offered a bandwidth-limiting approach to fight downloaders. But bandwidth limiting in Squid has a shortcoming: it is static and cannot change dynamically, so a person gets a limited bandwidth no matter how many people are using the internet (maybe nobody!). Also, this solution cannot stop people from downloading; they can still download, just at a lower speed. But if we find a way to terminate persistent connections (or any connection that is alive for more than a specific time), downloading big files will be almost impossible (there is always some way!)
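
    For the original goal of blocking downloads over ~50 MB, Squid also has a dedicated directive that sidesteps the connection-count problem entirely: reply_body_max_size caps the size of any single HTTP reply. A sketch (the directive is standard Squid, but the exact argument syntax differs between Squid 2.x and 3.x, so treat the line as illustrative):

        # squid.conf: refuse any reply body larger than 50 MB
        # (older Squid versions take the size in plain bytes: 52428800)
        reply_body_max_size 50 MB all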

    Read the article

  • Is this hard disk dead?

    - by Korjavin Ivan
    Not sure if this is the right site for this question, but let me try. Lately I have had a problem with a hard disk. Sometimes it makes a strange sound, and I get this from the logs:

        $ dmesg | grep ata4
        [29409.945516] ata4.00: exception Emask 0x10 SAct 0xf SErr 0x90202 action 0xe frozen
        [29409.945529] ata4.00: irq_stat 0x00400000, PHY RDY changed
        [29409.945538] ata4: SError: { RecovComm Persist PHYRdyChg 10B8B }
        [29409.945546] ata4.00: failed command: READ FPDMA QUEUED
        [29409.945562] ata4.00: cmd 60/30:00:56:22:5f/00:00:00:00:00/40 tag 0 ncq 24576 in
        [29409.945573] ata4.00: status: { DRDY }
        [29409.945580] ata4.00: failed command: READ FPDMA QUEUED
        [29409.945594] ata4.00: cmd 60/18:08:8e:22:5f/00:00:00:00:00/40 tag 1 ncq 12288 in
        [29409.945605] ata4.00: status: { DRDY }
        [29409.945611] ata4.00: failed command: READ FPDMA QUEUED
        [29409.945625] ata4.00: cmd 60/08:10:46:02:66/00:00:00:00:00/40 tag 2 ncq 4096 in
        [29409.945635] ata4.00: status: { DRDY }
        [29409.945641] ata4.00: failed command: READ FPDMA QUEUED
        [29409.945656] ata4.00: cmd 60/80:18:ee:04:66/00:00:00:00:00/40 tag 3 ncq 65536 in
        [29409.945666] ata4.00: status: { DRDY }
        [29409.945679] ata4: hard resetting link
        [29413.976083] ata4: softreset failed (device not ready)
        [29413.976097] ata4: applying SB600 PMP SRST workaround and retrying
        [29414.148070] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [29414.184986] ata4.00: SB600 AHCI: limiting to 255 sectors per cmd
        [29414.243280] ata4.00: SB600 AHCI: limiting to 255 sectors per cmd
        [29414.243292] ata4.00: configured for UDMA/133
        [29414.243324] ata4: EH complete
        [680674.804563] ata4: exception Emask 0x50 SAct 0x0 SErr 0x90a02 action 0xe frozen
        [680674.804575] ata4: irq_stat 0x00400000, PHY RDY changed
        [680674.804584] ata4: SError: { RecovComm Persist HostInt PHYRdyChg 10B8B }
        [680674.804603] ata4: hard resetting link
        [680678.840561] ata4: softreset failed (device not ready)

    Is this ata4 SATA hard drive dead? Must I change it ASAP? Do I need to give more info?
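
    Before replacing anything, the drive's own SMART data is the quickest way to tell a dying disk from a flaky link or cable (the PHY RDY changed / 10B8B errors above can also point to cabling). Standard smartmontools commands, with /dev/sdd as a placeholder device name:

        # Overall health verdict, error log and attribute table
        smartctl -a /dev/sdd
        # Kick off a thorough self-test, then re-check with -a when it finishes
        smartctl -t long /dev/sdd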

    Read the article

  • Automatically reconnect to VPN when it drops

    - by IAmAI
    I use OpenVPN to connect to a VPN service. I often leave it unattended, and on occasion I come back to find the service disconnected and the GUI asking for login credentials. If the connection is dropped by the service, and not by me, I'd like it to attempt to reconnect automatically with no intervention from me; ideally, if the reconnection attempt initially fails, it should keep trying at regular intervals until a connection succeeds. Is there any way to configure OpenVPN to do this? If not, can someone suggest a way of doing it with scripting (I use Windows)? Failing that, can anyone suggest a VPN solution that does this? The VPN provider supports PPTP as well as OpenVPN. I have configured OpenVPN to read login credentials from a file. Below is my config; I have censored any details specific to the VPN provider.

        client
        dev tun
        proto tcp
        remote ???.???.??? 0000
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ???.???.???
        verb 3
        mute-replay-warnings
        float
        reneg-sec 0
        auth-user-pass auth.conf
        auth-nocache

    Thanks for your help.
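
    OpenVPN can handle this on its own: with a dead-link timeout set, it tears the tunnel down and redials without prompting, and since the credentials come from auth.conf there is no login dialog to get stuck on. A sketch of the client-side additions (the timeout values are illustrative assumptions; some providers push ping/ping-restart from the server instead):

        # Add to the client config:
        ping 10          # probe the link every 10 seconds
        ping-restart 60  # declare it dead and redial after 60 silent seconds

    Running OpenVPN as a Windows service rather than from the GUI also helps it reconnect unattended.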

    Read the article

  • Log Blog

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved.

    Logging - A log blog

    In another blog (Missing Fields and Defaults) I spoke about not doing a blog about log files, but then I looked at it again and realized that this is a nice opportunity to show a simple yet powerful tool and also deal with static variables and functions in C#. My log had to be able to answer a few simple logging rules:

    To log or not to log? That is the question - Always log! That is the answer.

    Do we share a log? Even when a file is opened with a minimal lock, it does not share well and performance greatly suffers. So sharing a log is not a good idea. Also, when sharing, it is harder to find your particular entries and you have to establish rules about retention. My recommendation - do not share!

    How verbose? Your log can be very verbose (a good thing when testing), very terse (a good thing in day-to-day runs), or somewhere in between. You must be the judge. In my blog, I elect to always report a run with start and end times, and always report errors. I normally use 5 levels of logging: 4 - write all, 3 - write more, 2 - write some, 1 - write errors and timing, 0 - write none. The code sample below is more general than that. It uses the config file to set the max log level, and each call to the log assigns a level to the call itself. If the level is above the .config highest level, the line will not be written. Programmers decide which log belongs to which level, and thus we can set the .config differently for production and testing.

    Where do I keep the log? If your career is important to you, discuss this with the boss and with the system admin. We keep logs in the L: drive of our server and make sure that we have a directory for each app that needs a log. When adding a new app, add a new directory. The default location for the log is also found in the .config file.

    Print one or many? There are two options here:

    1. Print many, open but once - you start the stream and close it only when the program ends. This is what you can do when you perform in "batch" mode, like in a console app or an stsadm extension. The advantage is that starting and closing a stream is expensive and time consuming, and because we use a unique file, keeping it open for a long time does not cause contention problems.

    2. Print one entry at a time, or open many - every time you write a line, you start the stream, write to it and close it. This works for event receivers, feature receivers, and web parts. Here scalability requires us to create objects on the fly and get rid of them as soon as possible. A default value of oneOrMany resides in the .config.

    All of the above applies to any Windows or web application, not just SharePoint. So as usual, here is a routine that does it all, and a few simple functions that call it for a variety of purposes.
    So without further ado, here is app.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" >
              <section name="statics.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
            </sectionGroup>
          </configSections>
          <applicationSettings>
            <statics.Properties.Settings>
              <setting name="oneOrMany" serializeAs="String">
                <value>False</value>
              </setting>
              <setting name="logURI" serializeAs="String">
                <value>C:\staticLog.txt</value>
              </setting>
              <setting name="highestLevel" serializeAs="String">
                <value>2</value>
              </setting>
            </statics.Properties.Settings>
          </applicationSettings>
        </configuration>

    And now the code. In order to persist the variables between calls and also to be able to persist (or not to persist) the log file itself, I created an EventLog class with static variables and functions. Static functions do not need an instance of the class in order to work. If you ever wondered why our Main function is static, the answer is that something needs to run before instantiation so that other objects may be instantiated, and this is what the "static" Main does. The various logging functions and variables are created as static because they do not need instantiation, and as a fringe benefit they remain un-destroyed between calls. The Main function here is just used for testing. Note that it does not instantiate anything, just uses the log functions. This is possible because the functions are static. Also note that the function calls are of the form: Class.Function.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;

        namespace statics
        {
            class Program
            {
                static void Main(string[] args)
                {
                    //write a single line
                    EventLog.LogEvents("ha ha", 3, "C:\\hahafile.txt", 4, true, false);
                    //this single line will not be written because the msgLevel is too high
                    EventLog.LogEvents("baba", 3, "C:\\babafile.txt", 2, true, false);
                    //The next 4 lines will be written in succession - no closing
                    EventLog.LogLine("blah blah", 1);
                    EventLog.LogLine("da da", 1);
                    EventLog.LogLine("ma ma", 1);
                    EventLog.LogLine("lah lah", 1);
                    EventLog.CloseLog(); // log will close
                    //now with specific functions
                    EventLog.LogSingleLine("one line", 1);
                    //this is just a test, the log is already closed
                    EventLog.CloseLog();
                }
            }

            public class EventLog
            {
                public static string logURI = Properties.Settings.Default.logURI;
                public static bool isOneLine = Properties.Settings.Default.oneOrMany;
                public static bool isOpen = false;
                public static int highestLevel = Properties.Settings.Default.highestLevel;
                public static StreamWriter sw;

                /// <summary>
                /// the program will "print" the msg into the log
                /// unless msgLevel is > msgLimit
                /// oneOrMany is true when once - the program will open the log,
                /// print the msg and close the log. False when many - the program
                /// will keep the log open until close = true
                /// normally all the arguments will come from the app.config
                /// called by many overloads of LogLine
                /// </summary>
                /// <param name="msg"></param>
                /// <param name="msgLevel"></param>
                /// <param name="logFileName"></param>
                /// <param name="msgLimit"></param>
                /// <param name="oneOrMany"></param>
                /// <param name="close"></param>
                public static void LogEvents(string msg, int msgLevel, string logFileName, int msgLimit, bool oneOrMany, bool close)
                {
                    //to print or not to print
                    if (msgLevel <= msgLimit)
                    {
                        //open the file, from the argument (logFileName) or from the config (logURI)
                        if (!isOpen)
                        {
                            string logFile = logFileName;
                            if (logFileName == "")
                            {
                                logFile = logURI;
                            }
                            sw = new StreamWriter(logFile, true);
                            sw.WriteLine("Started At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
                            isOpen = true;
                        }
                        //print
                        sw.WriteLine(msg);
                    }
                    //close when instructed
                    if (close || oneOrMany)
                    {
                        if (isOpen)
                        {
                            sw.WriteLine("Ended At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
                            sw.Close();
                            isOpen = false;
                        }
                    }
                }

                /// <summary>
                /// The simplest, just msg and level
                /// </summary>
                /// <param name="msg"></param>
                /// <param name="msgLevel"></param>
                public static void LogLine(string msg, int msgLevel)
                {
                    //use the given msg and msgLevel and all others are defaults
                    LogEvents(msg, msgLevel, "", highestLevel, isOneLine, false);
                }

                /// <summary>
                /// one line at a time - open, print, close
                /// </summary>
                /// <param name="msg"></param>
                /// <param name="msgLevel"></param>
                public static void LogSingleLine(string msg, int msgLevel)
                {
                    LogEvents(msg, msgLevel, "", highestLevel, true, true);
                }

                /// <summary>
                /// used to close. high level, low limit, once and close are set
                /// </summary>
                public static void CloseLog()
                {
                    LogEvents("", 15, "", 1, true, true);
                }
            }
        }

    That's all folks!

    Read the article

  • Atomikos rollback doesn't clear JPA persistence context?

    - by HDave
    I have a Spring/JPA/Hibernate application and am trying to get it to pass my JUnit integration tests against H2 and MySQL. Currently I am using Atomikos for transactions and C3P0 for connection pooling. Despite my best efforts, one of my DAO integration tests is failing with org.hibernate.NonUniqueObjectException. In the failing test I create an object with the "new" operator, set the ID, and call persist on it:

        @Test
        @Transactional
        public void save_UserTestDataNewObject_RecordSetOneLarger() {
            int expectedNumberRecords = 4;
            User newUser = createNewUser();
            dao.persist(newUser);
            List<User> allUsers = dao.findAll(0, 1000);
            assertEquals(expectedNumberRecords, allUsers.size());
        }

    In the previous test method I do the same thing (createNewUser() is a helper method that creates an object with the same ID every time). I am sure that creating and persisting a second object with the same ID is the cause, but each test method is in its own transaction, and the object I created is bound to a private test method variable. I can even see in the logs that Spring Test and Atomikos are rolling back the transaction associated with each test method. I would have thought the rollback would also have cleared the persistence context. On a hunch, I added a call to dao.clear() at the beginning of the faulty test method and the problem went away!! So rollback doesn't clear the persistence context??? If not, then who does?? My EntityManagerFactory config is as follows:

        <bean id="myappTestLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
          <property name="persistenceUnitName" value="myapp-core" />
          <property name="persistenceUnitPostProcessors">
            <bean class="com.myapp.core.persist.util.JtaPersistenceUnitPostProcessor">
              <property name="jtaDataSource" ref="myappPersistTestJdbcDataSource" />
            </bean>
          </property>
          <property name="jpaVendorAdapter">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
              <property name="showSql" value="true" />
              <property name="database" value="$DS{hibernate.database}" />
              <property name="databasePlatform" value="$DS{hibernate.dialect}" />
            </bean>
          </property>
          <property name="jpaProperties">
            <props>
              <prop key="hibernate.transaction.factory_class">com.atomikos.icatch.jta.hibernate3.AtomikosJTATransactionFactory</prop>
              <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop>
              <prop key="hibernate.connection.autocommit">false</prop>
              <prop key="hibernate.format_sql">true</prop>
              <prop key="hibernate.use_sql_comments">true</prop>
            </props>
          </property>
        </bean>

    Read the article
