Search Results

Search found 19480 results on 780 pages for 'do your own homework'.

  • Apache - The name

    - by Joshua Enfield
    I am working on a migration to a newer virtualized server. The old one has Apache 2.2.4, according to the old server's phpinfo(). The new one, with the most up-to-date packages, has 2.2.3. How can this be, assuming no trickery is involved? The old one is years old. A lot of the guides I reference use apache2 in folder names and in many of the conventions. The newest version, as I understand it, is called httpd. Did Apache change the name from what it originally was? (i.e. break the web server component into its own project called httpd; I realize the original daemon was probably still called httpd)

  • How to create a new public AMI for windows?

    - by user67081
    I am trying to make a Windows 2008 AMI that is a nice clean 64-bit starter pack (IIS, SQL Express, ASP.NET MVC, etc.). I would like to make it a public AMI when it's done. Therein lies the problem. I can make an AMI from my image, no problem, but I can't seem to get new instances to generate their own passwords. The result is that I have a new instance that works great with my password. So what is the process for turning my EBS-backed instances into an AMI that will auto-generate its password and do all the other setup steps that Amazon wants to go through when a new instance starts up? Thanks in advance.

  • Windows 7 Slow Searching

    - by Guy Thomas
    I have a new Windows 7 machine with twice as much RAM and a faster processor than my old Windows Server 2008 R2 machine. I am disappointed that searching amongst my 10,000 image files takes twice as long on the new Windows 7 machine. Both machines have their own copy of the same files. In other respects, e.g. opening my huge Outlook files, the new machine is faster. The Windows Search service has started, and I set up indexing on the image folder about 3 days ago. Any ideas why I suffer from this poor index/search experience? Other than adding/removing folders, is there anything I can do to tweak indexing?

  • Options for connecting an external disk to an old laptop

    - by agnul
    I've tried connecting a 2.5" external drive to an old laptop which has only USB 1. The LED on the disk lights up, but the disk doesn't seem to spin up. Since the same disk works fine on a newer laptop my guess is that the old one doesn't output enough power on the USB port. Besides looking for an external drive with its own PSU, what would you suggest? Will one of those USB cables with two connectors work? What about a powered USB hub?

  • Increasing load capacity for growing website

    - by markxi
    My website currently runs on a dedicated web server (with LiteSpeed) and a dedicated MySQL database server. It's a download-based site with a lot of user-generated content, which can be streamed and downloaded; there are also thousands of thumbnails and plenty of static content. I'm at the stage where the web server can no longer handle the amount of traffic, so I'm looking at how best to increase capacity considering the large amount of downloadable content. My host suggests mirroring everything on a second web server and distributing the load between them using either DNS Made Easy or my own load balancer (using ldirector) in front of the two web servers. Could anyone advise whether the above method would be the best option? Does anyone have any experience with DNS Made Easy and/or ldirector? I'd appreciate any help.

  • Why is the use of abstractions (such as LINQ) so taboo?

    - by Matthew Patrick Cashatt
    I am an independent contractor and, as such, I interview 3-4 times a year for new gigs. I am in the midst of that cycle now and got turned down for an opportunity even though I felt like the interview went well. The same thing has happened to me a couple of times this year. Now, I am not a perfect guy and I don't expect to be a good fit for every organization. That said, my batting average is lower than usual, so I politely asked my last interviewer for some constructive feedback, and he delivered! The main thing, according to the interviewer, was that I seemed to lean too much towards the use of abstractions (such as LINQ) rather than towards lower-level, organically grown algorithms. On the surface, this makes sense; in fact, it made the other rejections make sense too, because I blabbed about LINQ in those interviews as well and it didn't seem that the interviewers knew much about LINQ (even though they were .NET guys).

    So now I am left with this question: if we are supposed to be "standing on the shoulders of giants" and using the abstractions that are available to us (like LINQ), then why do some folks consider it so taboo? Doesn't it make sense to pull code "off the shelf" if it accomplishes the same goals without extra cost? It would seem to me that LINQ, even if it is an abstraction, is simply an abstraction of all the same algorithms one would write to accomplish exactly the same end. Only a performance test could tell you whether your custom approach was better, but if something like LINQ meets the requirements, why bother writing your own classes in the first place? I don't mean to focus on LINQ here; I am sure the Java world has something comparable. I just would like to know why some folks get so uncomfortable with the idea of using an abstraction that they themselves did not write.

    UPDATE: As Euphoric pointed out, there isn't anything comparable to LINQ in the Java world. So, if you are developing on the .NET stack, why not always try to make use of it? Is it possible that people just don't fully understand what it does?
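
    To make the comparison concrete, here is a minimal C# sketch of the kind of thing at issue; the Order type and the sample data are invented for illustration, not taken from any of the interviews above. It shows a LINQ query next to the hand-rolled loop-and-sort it abstracts, and both produce the same result.

        // A LINQ query and its hand-written equivalent, side by side.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Order { public string Customer; public decimal Total; }

        class Demo
        {
            static void Main()
            {
                var orders = new List<Order>
                {
                    new Order { Customer = "A", Total = 250m },
                    new Order { Customer = "B", Total = 75m },
                    new Order { Customer = "C", Total = 120m },
                };

                // LINQ: declarative, one expression.
                var bigSpendersLinq = orders
                    .Where(o => o.Total > 100m)
                    .OrderByDescending(o => o.Total)
                    .Select(o => o.Customer)
                    .ToList();

                // Hand-rolled: the same filter, sort and projection written out by hand.
                var filtered = new List<Order>();
                foreach (var o in orders)
                    if (o.Total > 100m)
                        filtered.Add(o);
                filtered.Sort((x, y) => y.Total.CompareTo(x.Total));
                var bigSpendersLoop = new List<string>();
                foreach (var o in filtered)
                    bigSpendersLoop.Add(o.Customer);

                Console.WriteLine(string.Join(", ", bigSpendersLinq)); // A, C
                Console.WriteLine(string.Join(", ", bigSpendersLoop)); // A, C
            }
        }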

  • Entity framework separating entities for product and customer specific implementation

    - by Codecat
    I am designing an application with the intention of making it a product line. I would like to extend the functionality across all layers, and the first struggle is with the domain models. For example, the core functionality would have an entity named Invoice with a few standard fields, and then customer requirements will add some new fields to it, but I don't want to add them to the core Invoice class. For every customer I could use a customer-specific DbContext and inject the correct context with dependency injection. Also, every customer will get their own deployment.

        namespace Product.Domain
        {
            public class Invoice
            {
                public int InvoiceId { get; set; }
                // Other fields
            }
        }

    How do I approach this problem?

    Solution 1 does not work, since Entity Framework does not allow two classes with the same simple name:

        namespace CustomerA.Domain
        {
            public class Invoice : Product.Domain.Invoice
            {
                public User ReviewedBy { get; set; }
                public DateTime? ReviewedOn { get; set; }
            }
        }

    Solution 2 is to create a separate table and link it to the core domain table. Reusing services and controllers could be harder.

        namespace CustomerA.Domain
        {
            public class CustomerAInvoice
            {
                public Product.Domain.Invoice Invoice { get; set; }
                public User ReviewedBy { get; set; }
                public DateTime? ReviewedOn { get; set; }
            }
        }
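
    If Solution 2 is the direction taken, a minimal sketch of the customer-specific DbContext could look like the following. It assumes Entity Framework 6's fluent API and maps CustomerAInvoice as a shared-primary-key one-to-one extension of the core Invoice; the InvoiceId key property it relies on is my own addition for illustration, not part of the classes above.

        // Sketch only: per-customer context that bolts CustomerAInvoice onto the core Invoice
        // via a shared primary key, so core code can keep querying Product.Domain.Invoice alone.
        using System.Data.Entity;

        public class CustomerAContext : DbContext
        {
            public DbSet<Product.Domain.Invoice> Invoices { get; set; }
            public DbSet<CustomerA.Domain.CustomerAInvoice> CustomerInvoices { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Assumes CustomerAInvoice also declares: public int InvoiceId { get; set; }
                modelBuilder.Entity<CustomerA.Domain.CustomerAInvoice>()
                    .HasKey(x => x.InvoiceId);

                // Shared-primary-key 1:0..1 - the extension row's key doubles as its FK to Invoice.
                modelBuilder.Entity<CustomerA.Domain.CustomerAInvoice>()
                    .HasRequired(x => x.Invoice)
                    .WithOptional();
            }
        }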

  • How do I approach this collision model?

    - by PeeS
    This is the game level prototype I have already implemented. It has a few objects per room, to let me finally add some collision detection/response code into it. VIDEO As you can probably see, every object inside has its own AABB; even the room itself has an AABB, so the player is effectively 'inside the room AABB'. My player will be exactly inside the room, so he would have to collide correctly with those AABBs, so that when he hits any of the objects inside he gets a proper collision response from them. Now I would like to hear from you what kind of collision approach I should choose here. How do I approach this kind of stuff:

    - AABB-to-AABB collision detection, then when this is positive, go with AABB-to-triangle to find the proper plane normal and calculate the response?
    - AABB-to-AABB, then when positive, go with an AABB-to-AABB side check to find the proper plane normal and calculate the response?
    - Anything else?

    How do you do this? Many thanks.
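
    For reference, a minimal C# sketch of the AABB-to-AABB side check from the second option: it tests the boxes for overlap and, when they intersect, takes the axis of least penetration as the contact normal. The struct and member names are my own, not from the prototype, and tie-breaking between axes is arbitrary.

        // Overlap test plus least-penetration contact normal for axis-aligned boxes.
        using System;
        using System.Numerics;

        struct Aabb
        {
            public Vector3 Min, Max;

            public bool Intersects(in Aabb other) =>
                Min.X <= other.Max.X && Max.X >= other.Min.X &&
                Min.Y <= other.Max.Y && Max.Y >= other.Min.Y &&
                Min.Z <= other.Max.Z && Max.Z >= other.Min.Z;

            // On intersection, returns the normal pointing from 'other' toward this box
            // and the penetration depth along the axis with the smallest overlap.
            public bool TryGetContact(in Aabb other, out Vector3 normal, out float depth)
            {
                normal = Vector3.Zero;
                depth = 0f;
                if (!Intersects(other)) return false;

                float ox = MathF.Min(Max.X, other.Max.X) - MathF.Max(Min.X, other.Min.X);
                float oy = MathF.Min(Max.Y, other.Max.Y) - MathF.Max(Min.Y, other.Min.Y);
                float oz = MathF.Min(Max.Z, other.Max.Z) - MathF.Max(Min.Z, other.Min.Z);

                Vector3 delta = (Min + Max) * 0.5f - (other.Min + other.Max) * 0.5f;

                if (ox <= oy && ox <= oz) { depth = ox; normal = new Vector3(delta.X >= 0 ? 1f : -1f, 0f, 0f); }
                else if (oy <= oz)        { depth = oy; normal = new Vector3(0f, delta.Y >= 0 ? 1f : -1f, 0f); }
                else                      { depth = oz; normal = new Vector3(0f, 0f, delta.Z >= 0 ? 1f : -1f); }
                return true;
            }
        }

    Pushing the player out along normal * depth is then the simplest possible response; the AABB-to-triangle route from the first option would replace the normal computation with a test against the object's actual geometry.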

  • Views : ViewControllers, many to one, or one to one?

    - by conor
    I have developed an Android application where, typically, each view (layout.xml) displayed on the screen has its own corresponding fragment (for the purpose of this question I may refer to this as a ViewController). These views and Fragments/ViewControllers are appropriately named to reflect what they display. This has the effect of allowing the programmer to easily pinpoint the files associated with what they see on any given screen. The above refers to the one-to-one part of my question. Please note that there are a few exceptions, where very similar content is displayed on two views, so one ViewController is used for both (using a simple switch (type) to determine which layout.xml file to load).

    On the flip side, I am currently working on the iOS version of the same app, which I didn't develop. It seems that they are adopting more of a one-to-many (ViewController:View) approach. There appears to be one ViewController that handles the display logic for many different types of views. In the ViewController are an assortment of boolean flags and arrays of data (to be displayed) that are used to determine which view to load and how to display it. This seems very cumbersome to me, and coupled with no comments and ambiguous variable names, I am finding it very difficult to implement changes in the project. What do you think of the two approaches? Which one would you prefer? I'm really considering putting in some extra time at work to refactor the iOS app into a more 1:1-oriented approach. My reasoning for 1:1 over M:1 is modularity and legibility. After all, don't some people measure the quality of code by how easy it is for another developer to pick up the reins, or how easy it is to pull a piece of code and use it somewhere else?

  • Is chroot the right choice for my use case?

    - by Anthony
    Backstory: I am working on setting up a MineCraft server and want to allow admins to have SSH access to the MineCraft server console and the appropriate server files, but not the whole system. The console provided by the MineCraft server is only available to the user that launched the process. In addition, the admins will need terminal access to some basic CLI tools such as wget, cp, mv, rm, and a text editor.

    Plan: I have already set up the SSH aspect of things, requiring pre-shared keys and whatnot. The rest of the plan:

    - Set up a jailed environment in which all user activity will be contained.
    - Set up user accounts.
      - The first user account will be the minecraft user. The minecraft user will start the MC server in a multiuser screen session and allow the other admins to attach to it.
      - Subsequent users should have their own /home directory for normal usage.
    - Set up ACLs for the appropriate files to allow each user to edit the MC server files.
    - No one will be doing system updates, nor will anyone be installing any programs, so I'll be the only user with sudo.

    The issues: I don't want the SSH users to have access to the whole system, but users will still need to use wget or curl to update the MC server files. Is chroot the right tool for this use case, or is there something more appropriate for the job? I have no experience setting up a chroot environment and have found several tools to aid in this process. Jailkit seems to be the most robust, but it's not in the standard repos.

  • Internal Server Error when using eForm with ModX

    - by Austin
    One of our clients is using modx v0.9.6.3 with eForm. They also have their own Exchange server, and web hosting is done through us. We use eForm v1.4.4.5 to send out emails. However, they cannot get any emails from this form, and when it is addressed to [email protected] the form instantly returns a 500 Internal Server Error on submission. Note: the eForm is successfully sent when the receiver is anyone but the client. I have successfully received the email form report at a work email address and also at a personal email address.

  • Embedded Tomcat Cluster

    - by ThreaT
    Can someone please explain, with an example, how an embedded Tomcat cluster works? Would a load balancer be necessary? Since we're using embedded Tomcat, how would two separate jar files (each a standalone web application with its own embedded Tomcat instance) know where each other are and let each other know their status, etc.? Here is the code I have so far, which is just a regular embedded Tomcat without any clustering:

        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import java.io.File;
        import java.io.IOException;
        import java.io.Writer;
        // Catalina imports required by the classes used below (missing from the original listing):
        import org.apache.catalina.Context;
        import org.apache.catalina.LifecycleException;
        import org.apache.catalina.startup.Tomcat;

        public class Main {

            public static void main(String[] args)
                    throws LifecycleException, InterruptedException, ServletException {
                Tomcat tomcat = new Tomcat();
                tomcat.setPort(8080);

                Context ctx = tomcat.addContext("/", new File(".").getAbsolutePath());

                Tomcat.addServlet(ctx, "hello", new HttpServlet() {
                    protected void service(HttpServletRequest req, HttpServletResponse resp)
                            throws ServletException, IOException {
                        Writer w = resp.getWriter();
                        w.write("Hello, World!");
                        w.flush();
                    }
                });

                ctx.addServletMapping("/*", "hello");

                tomcat.start();
                tomcat.getServer().await();
            }
        }

    Source: java dzone

  • How to represent a graph with multiple edges allowed between nodes and edges that can selectively disappear

    - by Pops
    I'm trying to figure out what sort of data structure to use for modeling some hypothetical, idealized network usage. In my scenario, a number of users who are hostile to each other are all trying to form networks of computers where all potential connections are known. The computers that one user needs to connect may not be the same as the ones another user needs to connect, though; user 1 might need to connect computers A, B and D while user 2 might need to connect computers B, C and E. (Image generated with the help of NCTM Graph Creator.)

    I think the core of this is going to be an undirected cyclic graph, with nodes representing computers and edges representing Ethernet cables. However, due to the nature of the scenario, there are a few uncommon features that rule out adjacency lists and adjacency matrices (at least, without non-trivial modifications):

    - Edges can become restricted-use; that is, if one user acquires a given network connection, no other user may use that connection. In the example, the green user cannot possibly connect to computer A, but the red user has connected B to E despite not having a direct link between them.
    - In some cases, a given pair of nodes will be connected by more than one edge. In the example, there are two independent cables running from D to E, so the green and blue users were both able to connect those machines directly; however, red can no longer make such a connection.
    - If two computers are connected by more than one cable, each user may own no more than one of those cables.

    I'll need to do several operations on this graph, such as:

    - determining whether any particular pair of computers is connected for a given user
    - identifying the optimal path for a given user to connect target computers
    - identifying the highest-latency computer connection for a given user (i.e. the longest path without branching)

    My first thought was to simply create a collection of all of the edges, but that's terrible for searching. The best thing I can think to do now is to modify an adjacency list so that each item in the list contains not only the edge length but also its cost and current owner. Is this a sensible approach? Assuming space is not a concern, would it be reasonable to create multiple copies of the graph (one for each user) rather than a single graph?
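
    As a starting point for the modified adjacency list described above, here is a minimal C# sketch; the type and member names are my own invention. Each cable is a distinct Edge object shared by both endpoints' lists, so parallel cables between the same pair of computers stay separate and claiming a cable for one user is a single assignment.

        // Adjacency list whose entries are shared Edge objects carrying length and owner state.
        using System.Collections.Generic;
        using System.Linq;

        class Edge
        {
            public string A, B;     // endpoint computer ids
            public double Length;   // cable length / latency weight
            public string Owner;    // null while unclaimed; a user id once acquired
        }

        class Network
        {
            private readonly Dictionary<string, List<Edge>> adjacency =
                new Dictionary<string, List<Edge>>();

            public Edge AddCable(string a, string b, double length)
            {
                var e = new Edge { A = a, B = b, Length = length };
                Add(a, e);
                Add(b, e);   // same instance on both sides
                return e;
            }

            private void Add(string node, Edge e)
            {
                if (!adjacency.TryGetValue(node, out var list))
                    adjacency[node] = list = new List<Edge>();
                list.Add(e);
            }

            // Edges a given user may still traverse: ones they already own, or unclaimed ones.
            public IEnumerable<Edge> UsableEdges(string node, string user) =>
                adjacency.TryGetValue(node, out var list)
                    ? list.Where(e => e.Owner == null || e.Owner == user)
                    : Enumerable.Empty<Edge>();
        }

    Connectivity and path queries for a particular user can then run a standard BFS or Dijkstra that only expands UsableEdges, which may make per-user copies of the whole graph unnecessary.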

  • DNS security (hijacking?)

    - by Jongsma
    I am hosting my website on Linode and am also using their DNS/name servers (ns1.linode.com etc.). It occurred to me that I never had to authenticate that the domain is mine when I added the domain to the DNS manager, or at any other point. I now wonder whether it would be possible for other Linode users to 'hijack' my domain by simply adding the same domain zone and pointing it to their own server. I wouldn't know how Linode could determine which are the real/authentic records. How can I be sure this doesn't happen?

  • Prevent auto forwarding NDR loops

    - by DemonWareXT
    A week ago we experienced a really sweet problem at a client of ours. They are a school with around 1200 users, and every one of them has auto-forwarding activated for all mail. We use Exchange 2010. Now, a few of the users were able to create NDR loops by adding two different, wrong destinations. We had around 80k mails sent within a few hours. Not very practical. My question is: does anyone know a good way to prevent something like this? I have found two ways, which both fail for their own reasons:

    - We could manage the auto-forwarding on the Exchange host itself, which someone said should prevent this looping problem. But 1200 users? Not on my watch.
    - There is a PowerShell script out in the wild which should work against this, but my employers want something more "professional".

    Thank you very much for your support. Linus

  • Hosting and domain registrations for multiple clients under a single hosting account of mine?

    - by letseatfood
    I am finally getting regular work designing, developing, and deploying websites for small businesses and individuals. So far the websites use single-user content management systems, so as far as I know they create minimal load on the shared servers. I have always required that each of my clients purchase annual shared hosting at Dreamhost. For domain registration, I ask that they register with Dreamhost, but some already have a domain registered elsewhere, and that is fine with me. I do this so the billing issues are the client's responsibility, not mine. My question is: since I can register unlimited domains and connect them to my one shared hosting account at Dreamhost, should I not be requiring clients to individually pay for shared hosting and a domain? Should I actually be paying for one hosting account and then hosting all of my clients' websites on that account? As I said before, I currently have each client buy their own hosting because I feel that, for example, if there is high traffic to their site, there would be less of a chance of the site going down than if their site were hosted with many others on one account. I am famous for being long-winded; please let me know if I can clarify at all. Thanks!

  • Set Thunderbird to always reply using plain text [closed]

    - by stefan.at.wpf
    Possible duplicate: How do I tell Thunderbird never to send (or even try to send) HTML emails? In the account settings of Thunderbird (version 11.0.1) I have disabled HTML and set it to compose messages as plain text. That works for new messages. However, when I get an HTML email and reply to it, Thunderbird uses HTML. I went to the setting under Tools → Options → Composition → Send Options → Plain Text Domains and tried *.* as the domain name. I also changed the default text format in Settings → Compose → Send Options. Neither of them works; the reply is still sent as HTML (for my own text). How can I really reply in plain text only, regardless of the incoming format?

  • Using mod_speling with multi-level htaccess and rewriterules

    - by michaelcgorman
    We recently switched formats for managing our 301s. For the most part, everything went well, but it seems to have stopped mod_speling from working properly. Here's what we changed.

    Old /var/www/html/.htaccess:

        RewriteEngine on
        RewriteBase /
        # Change SHTML to HTML
        RewriteRule ^(.*)\.shtml$ $1.html [R=permanent,L]
        # Change PCF to HTML ('cause, you know, we probably have CMS users like that...)
        RewriteRule ^(.*)\.pcf$ $1.html [R=permanent,L]
        # Force WWW subdomain for all requests
        RewriteCond %{HTTP_HOST} !^www.example.edu$ [NC]
        RewriteRule ^(.*)$ http://www.example.edu/$1 [R,L]
        # User accounts are on sun.example.edu
        RedirectMatch ^/~(.*)$ http://sun.example.edu/~$1
        # Remove index.html at the end of URLs
        RewriteCond %{REQUEST_URI} ^(.*/)index\.html$ [NC]
        RewriteRule . %1 [R=301,NE,L]
        Redirect 301 /academics/calendar2012-13.html http://www.example.edu/academics/calendar.html
        Redirect 301 /academics/departments/ http://www.example.edu/majors/
        Redirect 301 /academics/Pre-Medical.pdf http://www.example.edu/academics/Pre-Medicine.pdf
        Redirect 301 ...

    New /var/www/html/.htaccess:

        RewriteEngine on
        RewriteBase /
        # Change SHTML to HTML
        RewriteRule ^(.*)\.shtml$ $1.html [R=permanent,L]
        # Change PCF to HTML ('cause, you know, we probably have CMS users like that...)
        RewriteRule ^(.*)\.pcf$ $1.html [R=permanent,L]
        # Force WWW subdomain for all requests
        RewriteCond %{HTTP_HOST} !^www.example.edu$ [NC]
        RewriteRule ^(.*)$ http://www.example.edu/$1 [R,L]
        # User accounts are on sun.example.edu
        RedirectMatch ^/~(.*)$ http://sun.example.edu/~$1
        # Remove index.html at the end of URLs
        RewriteCond %{REQUEST_URI} ^(.*/)index\.html$ [NC]
        RewriteRule . %1 [R=301,NE,L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*) 404/$1

    And then we added a new file at /var/www/html/404/.htaccess:

        RewriteEngine on
        RewriteBase /404
        RewriteRule ^academics/calendar2012-13.html$ /academics/calendar.html [R=302,L]
        RewriteRule ^academics/departments/$ /majors/ [R=301,L]
        RewriteRule ^academics/Pre-Medical.pdf$ /academics/Pre-Medicine.pdf [R=301,L]
        RewriteRule ...

    I do have (Webmin-based) access to the httpd.conf (though we don't want to store all our 301s there, if possible). We're running Apache 2.2.15 on RHEL 6 on a server in our own data center. Like I said, the only problem we're seeing is that mod_speling isn't doing its magic anymore. The new format has so many advantages over the old that we really don't want to go back, but mod_speling is so nice to have that we'd also really like it to work if possible. Any ideas for how we might be able to fix mod_speling?

  • Way to wake up win-xp pc, without hibernate or network?

    - by crosenblum
    I have no need to wake up a remote PC, just the PC I use at work. I want it to wake up by itself, without having to use hibernate, which I am not comfortable with and have never used. Then I can have the "System Scheduler" freeware that I use automatically run anti-virus updates, anti-malware updates, ccleaner /auto, and so on. It would just make my days easier if the machine was already on by the time I arrived. I usually turn my PC off when I am done with work, and if I don't, I have a scheduled task to auto-shutdown the PC one hour after my normal work hours. Without using hibernate, can I get my PC to wake up on its own at a certain time each weekday?

  • File storage service that allows clients to upload large files to my account?

    - by deceze
    Can anyone recommend an online file storage service which fulfills these requirements?

    - I can create an account
    - I can invite clients to upload files into my account
    - clients do not need to register to be able to upload
    - clients must not be able to see anything but their own files, or they must not see any files at all (they get only a dropbox)
    - only I can access the uploaded files; everything is non-public
    - the service is multi-lingual

    I just need clients to be able to send me potentially large files in a dead simple manner online, that's all. No registration step to go through, no software to download, no syncing or sharing. No setting up of individual folders and permissions for each individual client. No copying and pasting of links (a la Mediafire, Rapidshare etc.).

  • IRC Services with failover support?

    - by insertjokehere
    I run a single-server (call it 'server A') IRC 'network', and thanks to the generosity of some friends, I have been given a second server ('server B') that I can run an IRCd on in order to provide redundancy in case server A crashes. This is fine; I can set up round-robin DNS with the servers linked. The problem I have is what to do about services. Does anyone know of a way to get the services to 'fail over' in case of a server failure? E.g. server A starts off running the services but suddenly crashes; server B detects this and starts its own copy of the services (ideally with the same configuration and data as the services on server A). One solution that comes to mind is to write a bot that each server runs, which sits in a channel periodically checking whether the bot from the other server is present. If it is, then all is well. If not, then fail over. I would prefer not to have to code this myself, though. We are currently using Unreal IRCd and Anope services on Linux.

  • How to route traffic through a VPN tunnel?

    - by Gabriel
    The problem with our server is that we need to use the bug-ridden and awful AT&T network client, which causes our server to bluescreen once per 24 hours. Does anyone know how to (or have a good guide on how to) quickly set up a workstation running Windows Server 2008 R2 as a proxy server? This spare workstation would run the AT&T client and act as a bridge between our server and the server that can be connected to only via the AT&T VPN software. That way our own production server would not crash so often (or at all), and the workstation can happily crash whenever it wants to.

  • ASP.NET design not SOLID

    - by w0051977
    SOLID principles are described here: http://en.wikipedia.org/wiki/SOLID_%28object-oriented_design%29

    I am developing a large ASP.NET app. The previous developer created a few very large classes, each with lots of different purposes, and the result is very difficult to maintain and extend. The classes are deployed to the web server along with the code-behind files, etc. I want to share a small amount of the app with another application, so I am considering moving all of the classes of the ASP.NET web app into a DLL so the small subset of functionality can be shared. I realise it would be better to share only the classes which contain code to be shared, but because of the dependencies this is proving very difficult: e.g. class A contains code that should be shared, but class A also references classes B, C, D, E, F, G, etc., so class A cannot be shared on its own. I am planning to refactor the code in the future. As a temporary solution I am planning to convert all the classes into a single class library. Is this a bad idea, and if so, is there an alternative? I don't have time to refactor at the moment.
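
    As a middle ground between moving everything and a full refactor, one option (a hedged C# sketch; all type names here are invented, not from the actual app) is to invert only the dependencies that block sharing. Class A stops referencing B, C, D directly and instead depends on narrow interfaces, so class A plus those small interfaces can move into the shared library while the concrete implementations stay in the web app.

        // SharedLib: class A and the narrow interface it needs are the only things that move.
        namespace SharedLib
        {
            // Hypothetical slice of what class A actually used from class B.
            public interface ITaxCalculator
            {
                decimal TaxFor(decimal amount);
            }

            public class ClassA
            {
                private readonly ITaxCalculator taxCalculator;

                public ClassA(ITaxCalculator taxCalculator)
                {
                    this.taxCalculator = taxCalculator;
                }

                public decimal TotalWithTax(decimal amount)
                {
                    return amount + taxCalculator.TaxFor(amount);
                }
            }
        }

        // Web app: the heavyweight class stays where it is and just implements the interface.
        namespace WebApp
        {
            public class ClassB : SharedLib.ITaxCalculator
            {
                public decimal TaxFor(decimal amount)
                {
                    return amount * 0.2m;   // placeholder rate, for illustration only
                }
            }
        }

    The other application then references only SharedLib and supplies its own ITaxCalculator, which keeps the temporary single-DLL move from dragging B through G along with it.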

  • Rewrite for robots.txt and favicon.ico [closed]

    - by BHare
    I have set up some rules so that subdomains (my users) will default to where I have located the robots.txt, favicon.ico, and crossdomain.xml. Therefore, if a user creates a site, say testing.mywebsite.com, and they don't make their own favicon.ico at testing.mywebsite.com/favicon.ico, then it will use the favicon.ico I have in /misc/favicon.ico. This works perfectly, but it doesn't work for the main website. If you attempt to go to mywebsite.com/favicon.ico, it will check whether "/" exists, which it does, and then it never redirects to /misc/favicon.ico. How can I get it so both instances redirect to /misc/favicon.ico?

        # If crossdomain.xml (openpalace file), favicon.ico or robots.txt doesn't exist on their
        # side, then redirect to the site's own copy, just to have something to go on.
        RewriteCond %{REQUEST_URI} crossdomain.xml$
        RewriteCond ^(.+)crossdomain.xml !-f
        RewriteRule ^(.*)$ /misc/crossdomain.xml [L]

        RewriteCond %{REQUEST_URI} favicon.ico$
        RewriteCond ^(.+)favicon.ico !-f
        RewriteRule ^(.*)$ /misc/favicon.ico [L]

        RewriteCond %{REQUEST_URI} robots.txt$
        RewriteCond ^(.+)robots.txt !-f
        RewriteRule ^(.*)$ /misc/robots.txt [L]

  • Provide an OnChange event for an internal property which is controlled externally?

    - by NGLN
    For fun, and by request, I am updating this ImageGrid component, a kind of listbox for images that has a FileNames property of type TStrings. For ease of writing, I have been misusing its FileNames.Objects property for bitmap storage. But since the TStrings type suggests that users of the component could or would want to use the Objects property for custom data, e.g. like TListBox.Items, I am rewriting the component to store the bitmaps elsewhere and leave FileNames.Objects untouched for unknown future usage. Now I am wondering whether to provide an OnChange event, and if so, whether to fire it when one or more FileNames.Objects entries change. Trying to answer this myself, I dove into Delphi's own VCL and stumbled on:

    - TMemo: has an OnChange event, but ignores Lines.Objects
    - TListBox: has no OnChange event, but is capable of storing Items.Objects
    - TStringGrid: has no OnChange event, but is capable of storing Objects, Rows.Objects, Cols.Objects

    So now I am somewhat puzzled, because I cannot imagine that Borland's developers left out events for several Objects properties merely out of ease. Sure, when a user changes a FileNames.Object in my component, he knows he does and could implement the appropriate interaction himself. But wouldn't it be convenient if the component did it automatically? What would you expect from this component in this regard?
