Search Results

Search found 21310 results on 853 pages for 'multiple domains'.

  • running a command line app with sudo and password automatically on start up OS X (Lion)

    - by Designer023
    I need to run an app at startup/login on my Mac. I want it to launch in the background and start doing its work without interrupting me, and without me having to start it, because I invariably forget and then it isn't running when I need it! I have tried using AppleScript to tell Terminal to run it and type my password in, but it ends up opening multiple Terminal windows and not working. Ideally I need a script that I can just add to my user's login items so it runs for me. The app has no way of taking a password argument either, and it asks for its own password as well as the sudo one! I need a solution that can either be done as an AppleScript (which can be made into an executable) or as a command-line script, but I have no idea about those. This is what I type manually:

        sudo serverStatus
        password:123456
        password:serverpass

    Not sure if this is the right stack to ask, but I have no idea now and it's above my head! Thanks :D My AppleScript:

        tell application "Terminal"
            activate
            do shell script "sudo serverStatus"
            delay 5
            do shell script "123456"
            delay 2
            do shell script "serverpass"
        end tell
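
    A possible non-interactive route (a sketch under assumptions: the binary lives at /usr/local/bin/serverStatus, your account is "youruser", and the second prompt belongs to the app itself): let sudo run this one command without asking for a password, then answer the app's own prompt with expect.

        # /etc/sudoers.d/serverstatus  (edit with: sudo visudo -f /etc/sudoers.d/serverstatus)
        youruser ALL=(root) NOPASSWD: /usr/local/bin/serverStatus

        #!/usr/bin/expect -f
        # start-serverstatus -- sudo no longer prompts, so only the app's own password remains
        spawn sudo serverStatus
        expect "password:" { send "serverpass\r" }
        interact

    The expect script can then be added to your Login Items, or wrapped in a launchd LaunchAgent, so it starts in the background at login.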

  • vi and emacs: comparison? (not flamebait!)

    - by jared
    So, I've been enjoying learning and using vi for the last couple of years. The beauty of vi, for me, is that its UI is a language of movement and action with a very uniform, simple grammar, and which is terse enough that the requisite memorization pays ample dividends in how much more I enjoy working with text (by avoiding boring repetition and eliminating micro-hassles, like that half-second annoying wait while you scroll down the screen). (Note--I don't claim to have expert knowledge of vi, but I get around decently well: comfortable with limited '@' macros and regexp search-and-replace within files; frequently use multiple buffers, tabs, and windows; get around pretty well in the file browser; understand the grammar of actions + movement + subject (as described so aptly in this beautiful SO answer); and had some pretty sweet debugger and ctags integration going with PHP.) I wonder if some emacs folks could take a swing at explaining what emacs does brilliantly, or sum its strengths up in a phrase or two. Spare me the talk about productivity; I'm more interested in conceptual clarity. Lisp-centric answers are okay; I'm learning Scheme on the weekends, and would pick up emacs for that alone (have been using Racket).

  • Multiboot USB (OSX only): How to customize partition name?

    - by wrk2bike
    Trying to deal with all the Mac OSX recovery disks I've got by moving them to bootable USB images. I've got a big USB drive with multiple partitions for each recovery disk, and it's easy to use Disk Utility to "restore" the recovery DVD to a partition. When I boot my target Mac while holding down the Alt key, I can see all my bootable images and they work great. Problem is, they've all got the same name: "Mac OS X Install DVD." I manage Macs of various vintages. If my target Mac needs 10.6.3 for example, my only option seems to be to try each one until I get past the "Mac OSX can't be installed on this computer" message. I originally named my partitions with the OSX revision number, but that name is replaced by the disk image name during Disk Utility restore. Is there any way to customize the name during or after Disk Utility restore? I tried making a new DVD image on disk first and renaming it, but when I restore it to my recovery partition it has the original name. EDIT: After booting to the wrong partition, and getting the "..can't be installed" message, I can open the Startup Disk menu and see the other partitions - and as I select each one, the info at the bottom indicates which OS revision is on that partition. So I know the info is in there! Just want it at the boot screen if possible.
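
    A note for anyone hitting the same thing: the restored volume can usually be renamed afterwards from the Terminal, and the boot picker should pick up the new name (a sketch; the names are the ones from the question):

        # rename the freshly restored volume (run with the USB drive mounted)
        sudo diskutil rename "Mac OS X Install DVD" "Install 10.6.3"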

  • OSX server setup suggestions

    - by Tom
    I am looking into the possibility of setting up an OSX server for my employees, and would like some input on the best approach to meet my needs, and perhaps some suggestions if I am moving in the wrong direction. I am thinking of a Mac Mini OSX server, and am not sure whether my needs will be met, and what possibilities are out there. I want these capabilities:

    - Groups/Users managed on server
    - Shared folders and private folders for users/groups
    - Access to activated services
    - Server hosting software for the users (developing tools ++)
    - Similar to Windows Terminal Server
    - Virtual desktop environment (both local and over internet/VPN)
    - Possible to access through Mac and Windows

    The reason I am looking at OSX server is that my employees almost exclusively work in an OSX environment, and I want to offer the ability to log on to the server through some kind of terminal software and have full access to their work OSX environment and software from their Mac or PC, from anywhere they might be, instead of having multiple setups and needing to spend a lot of time installing and setting up the needed software on every client. This is a small business, where some work on the local network and others from the internet, preferably through VPN. A terminal server solution that is fast and easy to manage would be perfect for our needs. So if anyone has experience with a similar setup, please let me know what you did, and your experiences with it.

  • Lenovo Thinkpad T430 not booting from HDD if there is a USB modem connected

    - by user93353
    I have a Lenovo Thinkpad T430 running Win7. I use a ZTE USB modem (something like this) for my internet connection. I usually keep the modem plugged into the USB port even when the laptop is shut down or hibernating. This worked fine on my earlier laptops. But with the Lenovo, my laptop doesn't boot if the modem is in the USB port. It shows the initial character-based screen where it gives the Thinkpad message & BIOS details, and then waits. If I pull out the modem, it goes ahead. I have disabled USB as a boot option in my BIOS settings, but even then this happens sometimes (though not all the time). Likewise while resuming from hibernation. The USB modem also has drivers & an ISP connection client which get installed the first time you use it on any machine. I have used multiple laptops (HP, DELL, Acer, Gateway) but never faced this problem before. I have friends who use other Thinkpad models but haven't faced this issue. Any resolutions or workarounds for this?

  • Managing records of bugs and notes

    - by Jim
    Hi. I want to create a knowledgebase for a piece of software. I'd also like to be able to track bugs and common points of failure in that application. Linking knowledgebase articles to bug records would be a real boon, as would the ability to do complex queries for particular articles and bugs on the basis of tags or metadata. I've never done anything like this before, and I'd like to install as little as possible. I've been looking at creating a wiki with Wiki On A Stick, and it seems to offer a lot, but I can't make complex queries. I can create pages that list all 'articles' with a particular single tag, but I can't specify multiple tags or filters. Is there any software that can help? I don't want to spend money until I've tried something out thoroughly, and I'd ideally like something that demands little to no installation. Are there any tools that can help me? If something could easily export its data, or stored its data in XML, that would be a real plus too. Otherwise, are there any simple apps that allow me to set up forms for bugs, store the data as XML, then query and process that XML on demand? Thanks in advance.
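
    On the store-as-XML idea: even without a dedicated app, multi-tag queries over an XML bug file can be run with a stock command-line tool; a sketch with xmllint, where the bugs.xml layout is invented for illustration:

        # titles of bugs carrying both the "ui" and "crash" tags
        xmllint --xpath '//bug[tag="ui"][tag="crash"]/title/text()' bugs.xml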

  • Vim: How to join multiple lines based on a pattern?

    - by ryz
    I want to join multiple lines in a file based on a pattern that both lines share. This is my example:

        {101}{}{Apples}
        {102}{}{Eggs}
        {103}{}{Beans}
        {104}...
        ...
        {1101}{}{This is a fruit.}
        {1102}{}{These things are oval.}
        {1103}{}{You have to roast them.}
        {1104}...
        ...

    I want to join the lines {101}{}{Apples} and {1101}{}{This is a fruit.} into one line {101}{}{Apples}{1101}{}{This is a fruit.} for further processing. Same goes for the other lines. As you can see, both lines share the number 101, but I have no idea how to pull this off. Any ideas? /EDIT: I found a "workaround": first, delete all preceding "{1" characters from group two in VISUAL BLOCK mode with C-V (or a similar shortcut), then sort all lines by number with :%sort n, then join every second line with :let @q = "Jj" followed by 500@q. This works, but leaves me with {101}{}{Apples} 101}{}{This is a fruit.}. I would then need to add the missing characters "{1" in each line, which is not quite what I want. Any help appreciated.
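
    A pattern-based alternative to the sort-and-join workaround, done entirely inside Vim (a sketch; it assumes the first group's IDs are exactly three digits and the matching lines carry the same ID prefixed with a 1):

        " append, to each three-digit line, the line whose ID is the same number prefixed with 1
        :g/^{\d\{3}}{/s/$/\=getline(search('^{1' . matchstr(getline('.'), '\d\+') . '}', 'nw'))/
        " then delete the now-duplicated four-digit lines
        :g/^{1\d\{3}}{/d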

  • DNAT from localhost (127.0.0.1)

    - by pts
    I'd like to set up a TCP DNAT from 127.0.0.1, port 4242 to 11.22.33.44, port 5353 on Linux 3.x (currently 3.2.52, but I can upgrade if needed). It looks like the simple DNAT rule setup doesn't work: telnet 127.0.0.1 4242 hangs for a minute at Trying 127.0.0.1..., and then it times out. Maybe it's because the kernel is discarding the returning packets (e.g. SYN+ACK), because it considers them Martian. I don't need an explanation of why the simple solution doesn't work, I need a solution, even if it's complicated (e.g. it involves creating many rules). I could set up a usual DNAT from another local IP address, outside the 127.0.0.0/8 network, but now I need 127.0.0.1 as the destination address. I know that I can set up a user-level port forwarding process, but now I need a solution which can be set up using iptables and doesn't need helper processes. I was googling for this for an hour. It was asked multiple times, but I couldn't find any working solutions. Also there are many questions about DNAT to 127.0.0.1, but I don't need that, I need the opposite.
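
    On kernels 3.6 and newer, the commonly cited recipe combines route_localnet with an OUTPUT-chain DNAT plus masquerading; an untested sketch with the question's addresses:

        # let 127.0.0.0/8 destinations be rewritten and routed (kernel >= 3.6)
        sysctl -w net.ipv4.conf.all.route_localnet=1
        # redirect locally generated connections to 127.0.0.1:4242
        iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 4242 -j DNAT --to-destination 11.22.33.44:5353
        # rewrite the 127.0.0.1 source address so the replies are not Martian
        iptables -t nat -A POSTROUTING -p tcp -d 11.22.33.44 --dport 5353 -j MASQUERADE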

  • How to securely control access to a backend key server?

    - by andy
    I need to securely encrypt data in my database so that if the database is dumped, hackers are unable to decrypt the data. I'm planning on creating a simple key server on a different machine, and allowing the DB server access to it (restricted by IP address on the key server to permit the DB server). The key server would contain the key required to encrypt/decrypt data. However, if a hacker were able to get a shell on the DB server, they could request the key from the key server and therefore decrypt the data in the database. How could I prevent this (assuming all firewalls are in place, DB is not connected directly to the internet, etc)? i.e. is there some method I could use that could secure a request from the DB server to the key server so that even if a hacker had a shell on the DB server they'd be unable to make those same requests? Signed requests from the DB server could make issuing these requests less trivial - I suppose that'd help increase the amount of time it'd take to compromise the key server, something a hacker probably wouldn't have much of. As far as I can see, if someone can get a shell on the DB server everything's lost anyway. This could be mitigated by using one key per data item in the DB so at least there's not a single "master" key, but multiple keys that the hacker would need to access. What would be a secure method of ensuring requests from the DB server to the key server were authentic and could be trusted?
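
    As one concrete building block for that last question: have the key server require either a TLS client certificate or an HMAC over each request, computed with a secret provisioned on the DB server. A sketch using openssl and curl; the URL, header name and secret variable are all made up:

        # sign the request body with a shared secret, then send it over TLS
        body='{"key_id":"tenant-42"}'
        sig=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$SHARED_SECRET" | awk '{print $NF}')
        curl --cacert keyserver-ca.pem -H "X-Signature: $sig" -d "$body" https://keyserver.internal/v1/unwrap

    As the question itself notes, this only authenticates the machine, not its current occupant: anyone with a shell on the DB server can drive the same signing path, so per-item keys, rate limits and audit logging on the key server remain the real mitigations.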

  • IIS 7: launch unique site instance per host name

    - by OlduwanSteve
    Is it possible to configure IIS 7 so that a single site with multiple bindings (or wildcard bindings) will launch a unique instance for each unique host name? To explain why this is desirable, we have an application that retrieves its configuration from a remote system. The behaviour of the application is governed by this configuration and not by the 'web.config'. The application uses its host name as a key to retrieve the configuration. Currently it is a manual process to create an identical IIS site for each instance of the application, differing only by the bindings. My thought, if it were possible, is that it would be nice to have one IIS site that effectively works as a template for an arbitrary number of dynamic sites. Whenever it is accessed by a unique host name a new instance of the site would be launched, and all further requests to that host name would go to that instance just as though I had created the site by hand. I use IIS regularly, but only for fairly straightforward site hosting. I'd like to know if this could be configured with vanilla IIS 7, but would also welcome answers that require a plugin or 3rd party product. Programming/architectural suggestions about changes to the app wouldn't really be appropriate for serverfault.

  • What is the alternative of Apache's global Alias in IIS? (e.g. Alias /phpMyAdmin "c:/AppServ/www/phpMyAdmin")

    - by Sk8erPeter
    I know there's an "Add Virtual Directory..." option for every site in IIS, with which I can set up e.g. phpMyAdmin's path so it is reached by prepending /phpmyadmin to the address (e.g. http://example.com/phpmyadmin), but isn't there a "global" setting similar to Apache's Alias? For example, in Apache this setting looks like this:

        <IfModule mod_alias.c>
            Alias /phpMyAdmin "c:/AppServ/www/phpMyAdmin"
            Alias /phpmyadmin "c:/AppServ/www/phpMyAdmin"
        </IfModule>

    This way I can reach phpMyAdmin under every host. (http://example1.com/phpmyadmin and http://example2.com/phpmyadmin both work.) But in IIS, do I have to add a virtual directory to every site? I'm just curious, because we would like to serve several domains' content, so there would be multiple sites. It would be more comfortable to do it once (and have the opportunity to remove it once), but if I have to, I'll add a virtual directory for each site. (I know, maybe that's the better solution, because I can have a site where I don't want phpMyAdmin to be available, but I was just curious.) Thanks in advance!
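
    There is no true global alias in IIS 7, but the per-site chore can at least be scripted with appcmd so each new site stays a one-liner; a batch-file sketch using the question's path (use %S instead of %%S when typed at a prompt):

        REM add a /phpmyadmin virtual directory to the root application of every site
        for /f "delims=" %%S in ('%windir%\system32\inetsrv\appcmd list site /text:name') do (
            %windir%\system32\inetsrv\appcmd add vdir /app.name:"%%S/" /path:/phpmyadmin /physicalPath:"c:\AppServ\www\phpMyAdmin"
        )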

  • WebSphere hung threads, how can I track them down?

    - by Puzzled
    We have an application running on WebSphere (unfortunately it is 6.1, which is no longer supported; it has not yet been migrated in production to a later version) which becomes entirely unresponsive because of hung threads. As far as I can tell we entirely exhaust one of the thread pools. I have activated hung thread detection and I get a core/thread dump when hung threads are detected. The server can run for several days without problems but has crashed twice this week. When I load the core/thread dump in "IBM Thread and Monitor Dump Analyzer for Java", it tells me that there are a certain number of hung threads (this time it was 2, last time 11), multiple (usually around 40) threads "waiting on condition", and some running threads. I believe one of the thread pools is around that size (50). Now what I see in there are threads waiting for locks, holding locks, or in wait. Most of them show a stack trace which always ends like this:

        at java/lang/Object.wait(Native Method)
        at java/lang/Object.wait(Object.java:231)

    Now, how can I track this down to either a server configuration problem, an application issue, a WebSphere problem, or something else? How is this supposed to help me track down the problem when almost everything in there refers to IBM code? I cannot ask IBM for help as 6.1 is now an unsupported version of WebSphere, and while work has been done to make our application run under WebSphere 7, we are not yet ready to switch to it in Production.
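
    One step that usually narrows this down: take several javacores a minute or two apart while the pool is exhausted and compare them; threads sitting in the same application stack frame in every dump are the likely culprits, and the frames just above the java/lang/Object.wait lines name the pool and the resource being waited on. On IBM JVMs a javacore can be requested with a signal (the PID placeholder is the application server's Java process):

        # each SIGQUIT asks the IBM JVM to write a javacore*.txt into the profile directory
        kill -3 <websphere_java_pid>
        sleep 120
        kill -3 <websphere_java_pid>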

  • Will the removal of NAT (with the use of IPv6) be bad for consumers? [closed]

    - by Jonathan.
    Possible Duplicate: How will IPv6 impact everyday users? (World IPv6 Day) As I understand it, when we have finally made the switch to IPv6, not only will NAT be unnecessary, it is actually incompatible with IPv6? Will that mean that ISPs will have to serve multiple IP addresses per customer? Will they provide a range of addresses for each customer, or as each device connects will it get an IP address that isn't necessarily near those of the other devices in the house? But overall, will this be bad for Internet users? Surely it will allow ISPs to see exactly how many devices are being used, and so allow them to charge for the use of additional IP addresses? And if that happens, what happens when you try to connect an extra device to your network? Will it simply not get an IP address? In my home we have about 15-20 devices connected at once, but for places where there are hundreds of devices, it seems like the perfect opportunity for ISPs to charge more? I think I may have it completely wrong, so is there somewhere an explanation of how things will work when IPv6 becomes the norm?

  • Mail server DNS name fails to resolve for Mac clients

    - by Concordus Applications
    We have two internal DNS servers. One is located on a Linux server box and the other is the router's DNS management. We set the Linux box as primary DNS via DHCP and the router as secondary. We have a few Mac clients that are accessing our internal mail server (hostnamed "mail" internally). When using IMAP or SMTP against the mail server internally, the Mac boxes will sometimes fail to locate the server. If I use NSLOOKUP I can see that "mail" is pointed to the correct IP address and is being resolved via the correct DNS server, but if I ping "mail" it fails.

        ~ (bash)$ nslookup mail
        Server:   254.254.254.206
        Address:  254.254.254.206#53

        Name:     mail.example.com
        Address:  254.254.254.205

    Note: I replaced our actual internal IP addresses with 254.254.254.* If I wait a few minutes (3-5 minutes), somehow it resolves itself and sends successfully. This happens multiple times a day. The /etc/hosts file on the Mac boxes is the default config:

        ##
        # Host Database
        #
        # localhost is used to configure the loopback interface
        # when the system is booting. Do not change this entry.
        ##
        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    Is there something about Mac clients I should know to prevent this failed DNS resolution? Client boxes are: OSX 10.7.4, 8GB RAM, i5 MacBooks. Server is: Ubuntu 12.04 Server.
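
    A detail that often explains exactly this pattern: nslookup queries the DNS server directly, while ping and the mail clients go through the OS X system resolver, which handles a bare single-label name like mail via search domains and its own cache. Two commands show the resolver's view rather than the server's (a debugging sketch):

        # resolve the name the way ping/IMAP do, via the system resolver
        dscacheutil -q host -a name mail
        # dump resolver configuration: search domains, per-domain servers, timeouts
        scutil --dns

    If dscacheutil fails while nslookup succeeds, pointing the clients at the FQDN mail.example.com (or fixing the DHCP-supplied search domain) is the usual cure.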

  • what is the best multi-server configuration with OpenVPN

    - by sebut
    We have a number of database servers running MongoDB on Debian, plus a number of application servers also on Debian. The db servers hold replicating db clusters, so they need to talk to each other. Application servers need to talk to all db servers (for reasons of fault tolerance). The servers are potentially spread across multiple hosting centers, so we need secure channels between all of them. The number of servers is bound to grow, so we need a VPN solution that's easy to maintain and expand. This is why I feel that the SSH we use for testing might not be up to the task, and OpenVPN seems the way to go. I have ruled out TAP, since I understand that this would mean all traffic going to all the servers - perhaps this is a misunderstanding and TAP acts more like a switch? With TUN devices I imagine that all DB servers would live in their own separate subnet; they would also need a client configured to be able to connect to each of their peers. The application servers could live in a common subnet range with a client config only. Does this sound like a reasonable setup? Strangely, on the web I did not find anything about multi-server setups with OpenVPN. Thanks for all insights and ideas!
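
    On the TAP worry: TAP bridges at the Ethernet level, so broadcasts reach every node, but unicast traffic still flows point to point; it behaves more like a switch than a hub. For this layout the common shape is a routed TUN hub: one OpenVPN server (two for redundancy), every other machine a client, with client-to-client traffic enabled. A minimal sketch of the hub config; the subnet and directory name are placeholders:

        # server.conf -- hub carrying db<->db and app->db traffic
        dev tun
        topology subnet
        server 10.8.0.0 255.255.255.0   # VPN subnet handed out to clients
        client-to-client                # lets db servers reach each other directly
        client-config-dir ccd           # pin a fixed VPN address per server here
        keepalive 10 60
        persist-key
        persist-tun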

  • Friendly Intranet Addresses

    - by Jmyster
    Relatively new to IIS. I'm attempting to set up multiple sites on my intranet on one server. The server already has SharePoint installed on it, bound to *:80. So when I type //ServerName I get the home page of SharePoint. I get how that works. I set up a new site in IIS and set the binding to *:30015. On a remote machine, if I type //ServerName:30015 in a web browser, I get the new site. Awesome, working as intended. My questions: Can I set it up, and how, so that I can type //DivisionAppName or //Division.AppName and have it resolve to //ServerName:30015? Is this something I have to register with my company's DNS server? I hope not; getting my corporate IT to assist is a nightmare. What I tried: I have added bindings with the host name filled in as both DivisionAppName and Division.AppName on port 30015, but that doesn't seem to work.
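
    For the record, two pieces are needed: the name must resolve in DNS, and IIS must answer for it. And since a browser given //Division.AppName assumes port 80, the friendly name has to be bound on port 80 with a host header rather than on :30015. A sketch with the question's placeholder names:

        REM on the IIS box: bind the new site to the friendly name on port 80
        appcmd set site "NewSite" /+bindings.[protocol='http',bindingInformation='*:80:division.appname']
        REM then make the name resolve, e.g. a DNS CNAME: division.appname -> ServerName
        REM (a client-side hosts-file entry works for testing without corporate DNS)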

  • Renaming VLAN Interfaces in Linux

    - by rhololkeolke
    I need to know how to rename VLAN interfaces. I'm currently running Ubuntu 11.04. I'm running a networking application that takes frames on one interface, applies things like delays and errors, and then forwards the frames out another interface. The default naming convention, which names things <interface>.<vlan>, e.g. eth0.2, will not work for my purposes, because the program which parses the configuration script for the networking application doesn't like the dot in the interface name. I ran vconfig set_name_type VLAN_PLUS_VID, which solves the dot-in-the-name problem; however, I can then no longer assign the same VLAN id to multiple interfaces, because they end up with the same name. I know how to change physical interface names using udev rules, but because the VLANs will have the same MAC address and they aren't physical interfaces, I can't use those rules to rename them. Is there a way to rename any interface in Linux, including the virtual ones? Is there a way to specify your own naming convention for the vconfig set_name_type option without having to recompile the source of vconfig?
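
    With the iproute2 tools this needs neither vconfig's name types nor udev: a VLAN device can be created under, or renamed to, any free-form name, which also resolves the same-VID-on-two-NICs clash. A sketch (interface names are examples):

        # create VLAN 2 on two NICs, each with its own name
        ip link add link eth0 name lab2a type vlan id 2
        ip link add link eth1 name lab2b type vlan id 2
        ip link set lab2a up
        ip link set lab2b up

        # or rename an existing VLAN interface (it must be down first)
        ip link set eth0.2 down
        ip link set eth0.2 name lab2a
        ip link set lab2a up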

  • Computer turns on briefly, then right back off again

    - by goddamnyouryan
    So yesterday I came home from work and went to turn my computer on... it turned on for about 5 seconds, then promptly turned right back off again, before I ever saw anything on the screen. I tried again, same result. After several attempts, I've found that the length of time it stays on varies. After trying multiple times in a row, it only stays on for about 3 seconds. If I let it rest for a bit, it sometimes will stay on for up to a minute (though it never boots; the screen stays black the whole time). I'm not sure what is causing this issue. I built this computer a little more than 2 years ago and this is the first issue I have ever had with it. I did all the usual checks:

    - It's not the power switch
    - The capacitors on the motherboard all seem to be in working order
    - The PSU seems to be fine, as it lights up, the fan spins, and it will sometimes stay on for about a minute

    My hope is that the thermal paste on the CPU has degraded and just needs to be re-applied. Does that seem like a reasonable assumption? I'm going to tear the thing apart and do a minimum system build when I get home, but any heads up as to what I should be looking for would be much appreciated. Any thoughts?

  • how to throttle http requests on a linux machine?

    - by hooraygradschool
    EDIT: here is the summary: I need to reduce max connections, preferably system-wide on Ubuntu 11.04, but at least within Google Chrome. I do not need or want to throttle bandwidth; Verizon seems to only care about the number of connections, so that is all I want to change. Also, I don't want to use Firefox unless I have to; I have three other machines all using Chrome and synced, and I just prefer it over Firefox. I use tethering for my home internet connection via my Verizon cell phone, without paying for it. This works just fine for streaming Netflix via my Nintendo Wii and pretty much every other conceivable use I've had for it. Except, during heavy usage with multiple tabs open on my laptop, the network connection on my phone will just turn off, then on again, then off, and it never fully connects. I think, based on this and other questions, that this is caused by Verizon getting too many HTTP requests from my phone. Is there some software, script or setting that would allow me to throttle my requests to, say, 5 or 10 or whatever turns out to be 1 less than Verizon is looking for, so that my cell's network connection is not lost? I would far prefer a slowdown to a complete shut-off of my internet connection. I am almost certain it's from the quantity of requests and not related to data, because, as I mentioned, Netflix will run all day without a hitch, and that uses more data than anything else I would be doing. If I had a router, I am pretty sure there are settings I could easily change to only allow so many requests at a time... but in this case, my phone is my router, so no settings. I'm using Ubuntu 11.04 on my netbook with an HTC Incredible on Verizon (not that the phone details are relevant). I have been trying to figure this out for quite some time; currently the only fix is to ensure that all requests are stopped, and then sometimes it works again; other times I have to manually turn my 3G service off and then back on. Thank you so much for any assistance!
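
    iptables can cap how fast new outbound TCP connections are opened, system-wide, without touching bandwidth; a sketch, where 5/second is a starting guess to tune against whatever Verizon tolerates:

        # accept new outgoing TCP connections at a limited rate, drop the excess SYNs
        sudo iptables -A OUTPUT -p tcp --syn -m limit --limit 5/second --limit-burst 10 -j ACCEPT
        sudo iptables -A OUTPUT -p tcp --syn -j DROP

    Dropped SYNs are simply retried by the TCP stack a moment later, so the effect is the slowdown asked for rather than a dead connection.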

  • Multi-petabyte scale-out storage solution [closed]

    - by Alex Yuriev
    Let's say that I need a single-namespace, multi-petabyte object store with a file-system-like wrapper. What is currently out there that supports the following:

    - Single namespace that can take 1B files
    - Support for multiple entry points using NFS
    - At least node-level replication (preferably node and file level replication)
    - Online software upgrades
    - No "magic sauce" on the storage layer

    The following has been evaluated:

    - Gluster & Lustre - just ick - fundamental lack of understanding of why online upgrades are mandatory.
    - OneFS - we have it. It is smelling more and more like it hides a dead body under the hood.

    Other than MapR and ZFS, am I missing anything? P.S. Oh yes, I keep forgetting that the forums are for people to discuss whether a 2TB drive actually stores 2TB of info. My bad. Seriously though - how the heck can "meets the following requirements" be considered a "debate"? P.P.S. I did not throw an idiotic insult - I pointed out that this is actually an interesting question compared to a conversation about the storage capacity of a 2TB hard drive. It is not a question of what works better - it is a question that asks whether I missed any of the products that currently exist which fit the criteria, where the criteria are clearly outlined. I got one answer below which included something that I have not looked at in a long time, and which looks quite a bit more grown up than when I briefly looked at it before.

  • Open source app to manage and run commands on cloud servers? [closed]

    - by Mark Theunissen
    I'm creating a SaaS platform, and I need a component / library that can create, delete and store the connection details for cloud servers. It also needs to support executing shell commands on these servers and returning the response to the caller. I want a central database of servers and their configuration, plus the ability to reach out and manage the servers via SSH execution of bash scripts. I don't want something that needs agents on every server, like Chef. For example, this command is received by the hypothetical application:

        CREATE USER server = server12345 name = myuser

    It's translated into the following set of actions and executed by the app, which knows how to connect to server12345, and how to create a user on that server:

        $ ssh root@server12345
        $ adduser myuser

    And returns the output from the shell:

        Added user myuser.

    I've done research on Google and can't quite find something that does this already. I've found:

    fabric - This handles the executing of the shell commands very elegantly, and can take multiple server definitions, but it's supposed to be a deployment tool, so it doesn't do everything that would be required above - for example, it doesn't have a daemon mode where it listens for commands; it expects to be executed on the shell. It also can't provide the central database functionality.

    libcloud - This library can handle the server admin (CRUD) part, but doesn't have a command interface daemon either, and doesn't let you execute commands on the servers. I guess I need something that is a combination of libcloud, fabric and django for an API. Or something else that does the same thing, regardless of language.

    Overmind - Overmind is a GUI and wrapper around libcloud, but doesn't support the command execution part. What am I missing here?

  • Private subnet for VM server host-only network

    - by Derek Pressnall
    At my current job, we distribute a product based on a Linux server with multiple VMs defined (using KVM / libvirt). We are planning to expose limited ports to the customer's network, and use iptables to direct inbound traffic to the appropriate internal VM. My question: is there a class of private subnets that I can use for the internal host-only network that is least likely to conflict with a client IP subnet? Specifically, if I choose a /24 out of any of the RFC-1918 defined private subnets (such as 192.168.x.x), there is a chance of conflicting with a customer-used range. I noticed that several current VM implementations default to 192.168.122.x -- is this due to an RFC that I'm not familiar with, and therefore this is a safe range to use (that most network admins would avoid)? Or did the various VM vendors just pick that range randomly? I guess I'm looking for an IP range that is more private than the existing private (RFC1918) addresses. The only other thought I had was to use one of the "Test Net" IP ranges reserved for documentation purposes (RFC 5737). Note, that I'm not worried about a customer's network blocking these IPs, as this is only internal to our server (packets get NATted before leaving the box). However this does seem more unorthodox than just sticking with the default 192.168.122.x/24 subnet.
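
    One more range worth knowing about: 100.64.0.0/10, the RFC 6598 shared address space reserved for carrier-grade NAT, which customer LANs almost never use but which routes normally inside a box. (192.168.122.0/24, for its part, is simply libvirt's default, not anything reserved by an RFC.) A sketch of an isolated libvirt network defined on a slice of it; the name and addresses are arbitrary:

        <!-- isolated.xml: load with "virsh net-define isolated.xml && virsh net-start isolated" -->
        <network>
          <name>isolated</name>
          <bridge name='virbr9'/>
          <ip address='100.64.9.1' netmask='255.255.255.0'/>
        </network>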

  • Creating a multi-tenant application using PostgreSQL's schemas and Rails

    - by ramon.tayag
    Stuff I've already figured out: I'm learning how to create a multi-tenant application in Rails that serves data from different schemas based on what domain or subdomain is used to view the application. I already have a few concerns answered:

    - How can you get subdomain-fu to work with domains as well? Here's someone that asked the same question, which leads you to this blog.
    - What database, and how will it be structured? Here's an excellent talk by Guy Naor, and a good question about PostgreSQL and schemas. I already know my schemas will all have the same structure. They will differ only in the data they hold.
    - So, how can you run migrations for all schemas? Here's an answer.

    Those three points cover a lot of the general stuff I need to know. However, in the next steps I seem to have many ways of implementing things. I'm hoping that there's a better, easier way. Finally, to my question: when a new user signs up, I can easily create the schema. However, what would be the best and easiest way to load the structure that the rest of the schemas already have? Here are some questions/scenarios that might give you a better idea:

    - Should I pass it on to a shell script that dumps the public schema into a temporary one, and imports it back to my main database (pretty much like what Guy Naor says in his video)? Here's a quick summary/script I got from the helpful #postgres on freenode. While this will probably work, I'm gonna have to do a lot of stuff outside of Rails, which makes me a bit uncomfortable... which also brings me to the next question.
    - Is there a way to do this straight from Ruby on Rails? Like create a PostgreSQL schema, then just load the Rails database schema (schema.rb - I know, it's confusing) into that PostgreSQL schema.
    - Is there a gem/plugin that has these things already? Methods like "create_pg_schema_and_load_rails_schema(the_new_schema_name)". If there's none, I'll probably work at making one, but I'm doubtful about how well tested it'll be with all the moving parts (especially if I end up using a shell script to create and manage new PostgreSQL schemas).

    Thanks, and I hope that wasn't too long! UPDATE May 11, 2010 11:26 GMT+8: Since last night I've been able to get a method to work that creates a new schema and loads schema.rb into it. Not sure if what I'm doing is correct (it seems to work fine, so far), but it's a step closer at least. If there's a better way please let me know.

        module SchemaUtils
          def self.add_schema_to_path(schema)
            conn = ActiveRecord::Base.connection
            conn.execute "SET search_path TO #{schema}, #{conn.schema_search_path}"
          end

          def self.reset_search_path
            conn = ActiveRecord::Base.connection
            conn.execute "SET search_path TO #{conn.schema_search_path}"
          end

          def self.create_and_migrate_schema(schema_name)
            conn = ActiveRecord::Base.connection
            schemas = conn.select_values("select * from pg_namespace where nspname != 'information_schema' AND nspname NOT LIKE 'pg%'")
            if schemas.include?(schema_name)
              tables = conn.tables
              Rails.logger.info "#{schema_name} exists already with these tables #{tables.inspect}"
            else
              Rails.logger.info "About to create #{schema_name}"
              conn.execute "create schema #{schema_name}"
            end

            # Save the old search path so we can set it back at the end of this method
            old_search_path = conn.schema_search_path

            # Tried to set the search path like in the methods above (from Guy Naor):
            #   conn.execute "SET search_path TO #{schema_name}"
            # But the connection itself seems to remember the old search path.
            # If set this way, it works.
            conn.schema_search_path = schema_name

            # Directly from databases.rake.
            # In Rails 2.3.5 databases.rake can be found in railties/lib/tasks/databases.rake
            file = "#{Rails.root}/db/schema.rb"
            if File.exists?(file)
              Rails.logger.info "About to load the schema #{file}"
              load(file)
            else
              abort %{#{file} doesn't exist yet. It's possible that you just ran a migration!}
            end

            Rails.logger.info "About to set search path back to #{old_search_path}."
            conn.schema_search_path = old_search_path
          end
        end

  • Rendering a WPF Network Map/Graph layout - Manual? PathListBox? Something Else?

    - by Ben Von Handorf
    I'm writing code to present the user with a simplified network map. At any given time, the map is focused on a specific item... say a router or a server. Based on the focused item, other network entities are grouped into sets (i.e. subnets or domains) and then rendered around the focused item. Lines would represent connections and groups would be visually grouped inside a rectangle or ellipse. Panning and zooming are required features. An item can be selected to display more information in a "properties" style window. An item could also be double-clicked to re-focus the entire network map on that item. At that point, the entire map would be re-calculated. I am using MVVM without any framework, as of yet. Assume the logic for grouping items and determining what should be shown or not is all in place. I'm looking for the best way to approach the UI layout. So far, I'm aware of the following options:

    1. Use a Canvas for layout (inside a ScrollViewer to handle the panning). Have my ViewModel make use of a Layout Manager type of class, which would handle assigning all the layout properties (Top, Left, etc.). Bind my set of display items to an ItemsControl and use DataTemplates to handle the actual rendering.

       The drawbacks with this approach:
       - Highly manual layout on my part. Lots of calculation.
       - I have to handle item selection manually.
       - Computation of connecting lines is manual.

       The pros of this approach:
       - I can draw additional lines between child subnets as appropriate (manually).
       - Additional LayoutManagers could be added later to render the display differently.
       - This could probably be wrapped up into some sort of a GraphLayout control to be re-used.

    2. Present the focused item at the center of the display and then use a PathListBox for layout of the additional items. Have my ViewModel expose a simple list of things to be drawn and bind them to the PathListBox. Override the ListBoxItem Template to also create a line geometry from the borders of the focused item (tricky) to the bound item. Use DataTemplates to handle the case where the item being bound is a subnet, in which case we would use another PathListBox in the template to display items inside the subnet.

       The drawbacks with this approach:
       - Selected item synchronization across multiple PathListBoxes. Only one item on the whole graph can be selected at a time, but each child PathListBox maintains its own selection. Also, subnets cannot currently be selected, but would need to be selectable with additional work.
       - Drawing the connecting lines is going to be a bit of trickery in the ListBoxItem template, since I need to know the correct side of the focused item to connect to.

       The pros of this approach:
       - I get to stay out of the layout business, more.

    I'm looking for any advice or thoughts from others who have encountered similar issues or who have more WPF experience than I. I'm using WPF 4, so any new tricks are legal and encouraged.

  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly to an ASP.NET application where the data is not accessed via SOA, meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendations still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0, shipped with Visual Studio 2008 SP1.

    Why pick EF in the first place?

    Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF, and life is not bad.

    EF and inheritance

    The first big subject is inheritance. EF does support mapping for inherited classes, persisted in 2 ways: table per class and table per hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to the table-per-class model, as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated SQL is incredibly awful, takes a long time to get parsed by the EF, and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance, or as little as possible. Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. Ok, this is bad. The EF concept is to let you create your object structure without (or with as little as possible) consideration of the actual database implementation of your tables. It completely fails at this. So, recommendations? Avoid inheritance if you can, the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified sql-generation tool for querying, but there are still advantages to using it. And there are ways to implement mechanisms that are similar to inheritance.

    Bypassing inheritance with Interfaces

    First thing to know when trying to get some kind of inheritance going with EF: you cannot assign a non-EF-modeled class as a base class. Don't even try it; it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example, here is an IEntity interface that allows you to define Associations between EF entities where you don't know at design time what the type of the entity will be.

        public enum EntityTypes { Unknown = -1, Dog = 0, Cat }

        public interface IEntity
        {
            int EntityID { get; }
            string Name { get; }
            EntityTypes EntityType { get; }
        }

        public partial class Dog : IEntity
        {
            // implement EntityID and Name, which could actually be fields
            // from your EF model
            public EntityTypes EntityType { get { return EntityTypes.Dog; } }
        }

    Using this IEntity, you can then work with undefined associations in other classes:

        // lets take a class that you defined in your model;
        // that class has a mapping to the columns: PetID, PetType
        public partial class Person
        {
            public IEntity GetPet()
            {
                return IEntityController.Get(PetID, PetType);
            }
        }

    which makes use of some extension functions:

        public class IEntityController
        {
            static public IEntity Get(int id, EntityTypes type)
            {
                switch (type)
                {
                    case EntityTypes.Dog:
                        return Dog.Get(id);
                    case EntityTypes.Cat:
                        return Cat.Get(id);
                    default:
                        throw new Exception("Invalid EntityType");
                }
            }
        }

    Not as neat as plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many or many-to-many relationships, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effect of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regard.

    Compiled Queries

    Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2Sql. There are ways to improve it, however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So on the next run, you save the time it takes to parse the tree. Do not discount that, as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL, and using the CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached.) An aside here, in case you don't know what EntitySQL is: it is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1]. Ok, so here is how to do it using ObjectQuery<>:

        string query = "select value dog " +
                       "from Entities.DogSet as dog " +
                       "where dog.ID = @ID";
        ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance);
        oQuery.Parameters.Add(new ObjectParameter("ID", id));
        oQuery.EnablePlanCaching = true;
        return oQuery.FirstOrDefault();

    The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate:

        static readonly Func<Entities, int, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
                ctx.DogSet.FirstOrDefault(it => it.ID == id));

    or using linq:

        static readonly Func<Entities, int, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
                (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault());

    To call the query:

        query_GetDog.Invoke( YourContext, id );

    The advantage of CompiledQuery is that the syntax of your query is checked at compile time, whereas EntitySQL is not. However, there are other considerations...

    Includes

    Let's say you want the data for the dog's owner to be returned by the query, to avoid making 2 calls to the database. Easy to do, right? EntitySQL:

        string query = "select value dog " +
                       "from Entities.DogSet as dog " +
                       "where dog.ID = @ID";
        ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance).Include("Owner");
        oQuery.Parameters.Add(new ObjectParameter("ID", id));
        oQuery.EnablePlanCaching = true;
        return oQuery.FirstOrDefault();

    CompiledQuery:

        static readonly Func<Entities, int, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
                (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault());

    Now, what if you want to have the Include parametrized? What I mean is that you want a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavoriteToy, and so on. Basically, you want to tell the query which associations to load. It is easy to do with EntitySQL:

        public Dog Get(int id, string include)
        {
            string query = "select value dog " +
                           "from Entities.DogSet as dog " +
                           "where dog.ID = @ID";
            ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)
                .IncludeMany(include);
            oQuery.Parameters.Add(new ObjectParameter("ID", id));
            oQuery.EnablePlanCaching = true;
            return oQuery.FirstOrDefault();
        }

    The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (which accepts only a single path) with an IncludeMany(string) that lets you pass a string of comma-separated associations to load. Look further in the extension section for this function. If we try to do it with CompiledQuery, however, we run into numerous problems. The obvious

        static readonly Func<Entities, int, string, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) =>
                (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault());

    will choke when called with:

        query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" );

    because, as mentioned above, Include() only wants to see a single path in the string, and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function:

        static readonly Func<Entities, int, string, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) =>
                (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault());

    Wrong again, this time because the EF cannot parse IncludeMany: it is not part of the functions that it recognizes; it is an extension. Ok, so you want to pass an arbitrary number of paths to your function and Include() only takes a single one. What to do? You could decide that you will never ever need more than, say, 20 Includes, and pass each separated string in a struct to CompiledQuery. But now the query looks like this:

        from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3)
                              .Include(include4).Include(include5).Include(include6)
                              .[...].Include(include19).Include(include20)
        where dog.ID == id
        select dog

    which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery<> with CompiledQuery? Then set the includes on that?

    Well, that's what I would have thought as well:

        static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, int, ObjectQuery<Dog>>((ctx, id) =>
                (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog));

        public Dog GetDog( int id, string include )
        {
            ObjectQuery<Dog> oQuery = query_GetDog(YourContext, id);
            oQuery = oQuery.IncludeMany(include);
            return oQuery.FirstOrDefault();
        }

    That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query, because it is an entirely new one now! So the expression tree needs to be reparsed and you take that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. They are great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used, because the syntax is checked at compile time, but due to this limitation, that's not possible. An example of use would be: you may want a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required.

    Passing more than 3 parameters to a CompiledQuery

    Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pittance, but it can be improved on very easily:

        public struct MyParams
        {
            public string param1;
            public int param2;
            public DateTime param3;
        }

        static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) =>
                from dog in ctx.DogSet
                where dog.Age == myParams.param2
                   && dog.Name == myParams.param1
                   && dog.BirthDate > myParams.param3
                select dog);

        public List<Dog> GetSomeDogs( int age, string name, DateTime birthDate )
        {
            MyParams myParams = new MyParams();
            myParams.param1 = name;
            myParams.param2 = age;
            myParams.param3 = birthDate;
            return query_GetDog(YourContext, myParams).ToList();
        }

    Return Types

    (This does not apply to EntitySQL queries, as they aren't compiled the same way as the CompiledQuery method.) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other function downstream wants to change the query in some way:

        static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) =>
                from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog);

        public IEnumerable<Dog> GetSomeDogs( int age, string name )
        {
            return query_GetDog(YourContext, age, name);
        }

        public void DataBindStuff()
        {
            IEnumerable<Dog> dogs = GetSomeDogs(4, "Bud");
            // but I want the dogs ordered by BirthDate
            gridView.DataSource = dogs.OrderBy( it => it.BirthDate );
        }

    What is going to happen here? By still playing with the original ObjectQuery (the actual return type of the Linq statement, which implements IEnumerable), you invalidate the compiled query and force a re-parse. So, the rule of thumb is to return a List<> of objects instead:

        static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) =>
                from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog);

        public List<Dog> GetSomeDogs( int age, string name )
        {
            return query_GetDog(YourContext, age, name).ToList(); // <== change here
        }

        public void DataBindStuff()
        {
            List<Dog> dogs = GetSomeDogs(4, "Bud");
            // but I want the dogs ordered by BirthDate
            gridView.DataSource = dogs.OrderBy( it => it.BirthDate );
        }

    When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it.

    Performance

    What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read: inheritance), I have seen upwards of 10 seconds. So the first time a pre-compiled query gets called, you take a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query. Practically the same as Linq2Sql. When you load a page with pre-compiled queries the first time, you will take a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled query will end up being called), while subsequent loads will take less than 300ms. A dramatic difference, and it is up to you to decide if it is ok for your first user to take a hit, or if you want a script to call your pages to force a compilation of the queries.

    Can this query be cached?

        Dog dog = (from d in YourContext.DogSet where d.ID == id select d).FirstOrDefault();

    No, ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it.

    Parametrized Queries

    Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lambda expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which ones you want to use:

        public struct MyParams
        {
            public string name;
            public bool checkName;
            public int age;
            public bool checkAge;
        }

        static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) =>
                from dog in ctx.DogSet
                where (!myParams.checkAge || dog.Age == myParams.age)
                   && (!myParams.checkName || dog.Name == myParams.name)
                select dog);

        protected List<Dog> GetSomeDogs()
        {
            MyParams myParams = new MyParams();
            myParams.name = "Bud";
            myParams.checkName = true;
            myParams.age = 0;
            myParams.checkAge = false;
            return query_GetDog(YourContext, myParams).ToList();
        }

    The advantage here is that you get all the benefits of a pre-compiled query.
    The disadvantages are that you will most likely end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query, and that each query you run is not as efficient as it could be (particularly with joins thrown in). Another way is to build an EntitySQL query piece by piece, like we all did with SQL:

        protected List<Dog> GetSomeDogs( string name, int age )
        {
            string query = "select value dog from Entities.DogSet where 1 = 1 ";
            if( !String.IsNullOrEmpty(name) )
                query = query + " and dog.Name == @Name ";
            if( age > 0 )
                query = query + " and dog.Age == @Age ";

            ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext );
            if( !String.IsNullOrEmpty(name) )
                oQuery.Parameters.Add( new ObjectParameter( "Name", name ) );
            if( age > 0 )
                oQuery.Parameters.Add( new ObjectParameter( "Age", age ) );

            return oQuery.ToList();
        }

    Here the problems are:
    - there is no syntax checking during compilation
    - each different combination of parameters generates a different query, which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a real-world search.
    - no one likes to concatenate strings!

    Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot, but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query, and then refine the results:

        protected List<Dog> GetSomeDogs( string name, int age, string city )
        {
            string query = "select value dog from Entities.DogSet where dog.Owner.Address.City == @City ";
            ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext );
            oQuery.Parameters.Add( new ObjectParameter( "City", city ) );

            List<Dog> dogs = oQuery.ToList();
            if( !String.IsNullOrEmpty(name) )
                dogs = dogs.Where( it => it.Name == name ).ToList();
            if( age > 0 )
                dogs = dogs.Where( it => it.Age == age ).ToList();
            return dogs;
        }

    It is particularly useful when you start by displaying all the data and then allow for filtering. Problems:
    - It could lead to serious data transfer if you are not careful about your subset.
    - You can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name.

    So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem:
    - Use lambda-based query building when you don't care about pre-compiling your queries.
    - Use fully-defined pre-compiled Linq queries when your object structure is not too complex.
    - Use EntitySQL/string concatenation when the structure could be complex and the possible number of different resulting queries is small (which means fewer pre-compilation hits).
    - Use in-memory filtering when you are working with a smallish subset of the data, or when you had to fetch all of the data anyway (if the performance is fine with all the data, then filtering in memory will not cause any time to be spent in the db).
Singleton access

The best way to deal with your context and entities across all your pages is to use the singleton pattern:

    public sealed class YourContext
    {
        private const string instanceKey = "YourEntitiesKey";

        YourContext() {}

        public static YourEntities Instance
        {
            get
            {
                HttpContext context = HttpContext.Current;
                if( context == null )
                    return Nested.instance;

                if( context.Items[instanceKey] == null )
                {
                    YourEntities entity = new YourEntities();
                    context.Items[instanceKey] = entity;
                }
                return (YourEntities)context.Items[instanceKey];
            }
        }

        class Nested
        {
            // Explicit static constructor to tell C# compiler
            // not to mark type as beforefieldinit
            static Nested() {}

            internal static readonly YourEntities instance = new YourEntities();
        }
    }

NoTracking, is it worth it?

When executing a query, you can tell the framework to track the objects it will return or not. What does that mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? created? deleted?) and will also link objects together when further queries are made from the database, which is what is of interest here.

For example, let's assume that the Dog with ID == 2 has an owner whose ID == 10.

    Dog dog = (from dog in YourContext.DogSet
               where dog.ID == 2
               select dog).FirstOrDefault();
    // dog.OwnerReference.IsLoaded == false;

    Person owner = (from o in YourContext.PersonSet
                    where o.ID == 10
                    select o).FirstOrDefault();
    // dog.OwnerReference.IsLoaded == true;

If we were to do the same with no tracking, the result would be different:

    ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)
        (from dog in YourContext.DogSet
         where dog.ID == 2
         select dog);
    oDogQuery.MergeOption = MergeOption.NoTracking;
    Dog dog = oDogQuery.FirstOrDefault();
    // dog.OwnerReference.IsLoaded == false;

    ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)
        (from o in YourContext.PersonSet
         where o.ID == 10
         select o);
    oPersonQuery.MergeOption = MergeOption.NoTracking;
    Person owner = oPersonQuery.FirstOrDefault();
    // dog.OwnerReference.IsLoaded == false;

Tracking is very useful, and in a perfect world without performance issues it would always be on. But in this world there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for.

Is there any chance that data you query with NoTracking could be used to make an update/insert/delete in the database? If so, don't use NoTracking, because associations are not tracked and exceptions will be thrown. In a page where there are absolutely no updates to the database, you can safely use NoTracking.

Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix them, you risk having the framework try to Attach() a NoTracking object to the context while another copy of the same object exists with tracking on. Basically, what I am saying is that in

    Dog dog1 = (from dog in YourContext.DogSet
                where dog.ID == 2
                select dog).FirstOrDefault();

    ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)
        (from dog in YourContext.DogSet
         where dog.ID == 2
         select dog);
    oDogQuery.MergeOption = MergeOption.NoTracking;
    Dog dog2 = oDogQuery.FirstOrDefault();

dog1 and dog2 are two different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I already have an object here with the same database key. Fail." And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful.
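One way to keep the two worlds apart is to funnel every read-only fetch through a small helper that sets the merge option in one place. This is only a sketch: the helper name is made up, but the cast and the MergeOption property are the same ObjectQuery API used above.

    // Runs any LINQ-to-Entities query with NoTracking.
    // Use the results for display only, never for updates/inserts/deletes.
    public static List<T> NoTrackingList<T>( IQueryable<T> query )
    {
        ObjectQuery<T> oQuery = (ObjectQuery<T>)query;
        oQuery.MergeOption = MergeOption.NoTracking;
        return oQuery.ToList();
    }

    // Usage:
    List<Dog> dogs = NoTrackingList( from dog in YourContext.DogSet
                                     where dog.Age > 2
                                     select dog );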
How much faster is it with NoTracking?

It depends on the queries. Some are much more susceptible to tracking overhead than others. I don't have a fast and easy rule for it, but it helps.

So I should use NoTracking everywhere then?

Not exactly. There are some advantages to tracking objects. The first one is that the object is cached, so a subsequent call for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntities object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanisms could extend that.)

What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible, of course, but it does happen.

Also, remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link between them:

    ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog);
    oDogQuery.MergeOption = MergeOption.NoTracking;
    List<Dog> dogs = oDogQuery.ToList();

    ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o);
    oPersonQuery.MergeOption = MergeOption.NoTracking;
    List<Person> owners = oPersonQuery.ToList();

In this case, no dog will have its .Owner property set. Something to keep in mind when you are trying to optimize the performance.

No lazy loading, what am I to do?

This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF.

Of course, you can call

    if( !ObjectReference.IsLoaded )
        ObjectReference.Load();

if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense.

Let's say you have your Dog object:

    public class Dog
    {
        public static Dog Get( int id )
        {
            return YourContext.DogSet.FirstOrDefault( it => it.ID == id );
        }
    }

This is the type of function you work with all the time. It gets called from all over the place, and once you have that Dog object, you will do very different things with it in different functions. First, it should be pre-compiled, because you will call it very often. Second, each page will want access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc.

Of course, you could call Load() for each reference you need, anytime you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object:

    static public Dog Get( int id ) { return Get( id, "" ); }

    static public Dog Get( int id, string includePath )
    {
        string query = "select value o "
                     + " from YourEntities.DogSet as o "
                     + " where o.ID = @ID";   // (assumed continuation: filter on the key)

        ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext );
        oQuery.Parameters.Add( new ObjectParameter( "ID", id ) );

        // Attach whatever navigation path the caller asked for, so the whole
        // graph comes back in one database call.
        if( !String.IsNullOrEmpty(includePath) )
            oQuery = oQuery.Include( includePath );

        return oQuery.FirstOrDefault();
    }
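Each page can then pull exactly the object graph it needs in a single database call; for illustration, assuming the Owner, FavoriteToy, and Owner.Address navigation paths mentioned earlier in this answer:

    // The profile page needs the owner; the toy page needs the favorite toy.
    Dog dogForProfilePage = Dog.Get( 2, "Owner" );
    Dog dogForToyPage = Dog.Get( 2, "FavoriteToy" );

    // Dotted paths follow the standard ObjectQuery.Include syntax,
    // e.g. the owner's address:
    Dog dogForMapPage = Dog.Get( 2, "Owner.Address" );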

