Search Results

Search found 26618 results on 1065 pages for 'amazon instance store'.

Page 704/1065 | < Previous Page | 700 701 702 703 704 705 706 707 708 709 710 711  | Next Page >

  • Aggressive Auto-Updating?

    - by MattiasK
    What do you guys think is best practice regarding auto-updating? Google Chrome, for instance, seems to auto-update itself as soon as it gets a chance, without asking, and I'm fine with it. I think most "normal" users benefit from updates being a transparent process. Then again, some more technical users might be miffed if you update their app without permission. As I see it there are 3 options:

    1) Have a checkbox when installing that says "allow automatic updates"
    2) Just have a preference somewhere that allows you to "disable automatic updates", so that you have to check for updates manually

    I'm leaning towards 2), because 1) feels like it might alienate non-technical users and I'd rather avoid installation queries if possible. I'm also thinking about making it easy to downgrade if an upgrade (heaven forbid) causes trouble - what are your thoughts? Another question: even if updates are applied automatically, perhaps they should still be announced - if there are new features, for example, otherwise you might not realize they exist and use them. One thing that kinda scares me, though, is the security implications: someone could theoretically hack my server and push out spyware/zombieware to all my customers. It seems that using digital signatures to prevent man-in-the-middle attacks is the least you could do, otherwise you might be hooked up to a network that spoofs the address of your update server.
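
    A minimal sketch of the signature idea mentioned above, assuming the updater ships the publisher's RSA public key and downloads a detached signature alongside each update; the file paths, key handling and use of Python's cryptography package are illustrative assumptions, not part of the question:

        # Sketch: refuse to install an update unless it verifies against the
        # publisher's public key. Paths and key management are placeholders.
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding
        from cryptography.hazmat.primitives.serialization import load_pem_public_key

        def update_is_authentic(update_path, signature_path, pubkey_path):
            public_key = load_pem_public_key(open(pubkey_path, "rb").read())
            update_bytes = open(update_path, "rb").read()
            signature = open(signature_path, "rb").read()
            try:
                # Only the publisher's private key can produce a valid signature,
                # so a spoofed update server cannot push arbitrary payloads.
                public_key.verify(
                    signature,
                    update_bytes,
                    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                                salt_length=padding.PSS.MAX_LENGTH),
                    hashes.SHA256(),
                )
                return True
            except InvalidSignature:
                return False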

    Read the article

  • In developing a soap client proxy, which return structure is easier to use and more sensible?

    - by cori
    I'm writing (in PHP) a client/proxy for a SOAP web service. The return types are consistently wrapped in response objects that contain the return values. In many cases this makes a lot of sense - for instance when multiple values are being returned:

        GetDetailsResponse Object (
            Results Object (
                [TotalResults] => 10
                [NextPage] => 2
            )
            [Details] => Array (
                [0] => Detail Object (
                    [Id] => 1
                )
            )
        )

    But some of the methods return a single scalar value, or a single object or array, wrapped in a response object:

        GetThingummyIdResponse Object (
            [ThingummyId] => 42
        )

    In some cases these objects might be pretty deep, so getting at properties within requires drilling down several layers:

        $response->Details->Detail[0]->Contents->Item[5]->Id

    And if I unwrap them before passing them back, I can strip out a layer from consumers' code. I know I'm probably being a little bit of an Architecture Astronaut here, but the latter style really bugs me, so I've been working through my code to have my proxy methods just return the scalar value to the client code where there's no absolute need for a wrapper object. My question is: am I actually making things more difficult for the consumers of my code? Would I be better off just leaving the return values wrapped in response objects so that everything is consistent, or is removing unnecessary layers of indirection/abstraction worthwhile?
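
    The question is about PHP, but the unwrapping idea is language-agnostic; here is a small sketch of it in Python (the proxy class, the soap_client object and the operation name are made-up stand-ins, not the actual API):

        # Illustrative only: the proxy hides the single-field response wrapper,
        # so callers get the bare value instead of response.ThingummyId.
        class ThingummyProxy:
            def __init__(self, soap_client):
                self.soap_client = soap_client

            def get_thingummy_id(self):
                # The raw service returns GetThingummyIdResponse { ThingummyId: 42 }.
                response = self.soap_client.GetThingummyId()
                return response.ThingummyId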

    Read the article

  • How can I tell which laptop touch-screens work well with a stylus (for drawing/taking notes)?

    - by BlueRaja
    I'm looking for a laptop with a touch-screen and stylus for drawing/note-taking. I've read the difference between the different kinds of styluses, but that's only half the story - what about the touch-screen? How do I know if the touch-screen supports "palm-rejection"? Or if the included stylus is a capacitive stylus or a "Wacom digitizer"? Or if the screen will even support Wacom? How can I tell how accurate the touch-screen is (from my testing, some definitely seem to have higher "resolution" than others)? Is there anything else I should be looking at? I don't see any of this information on, for instance, the Newegg specs page for a laptop.

    Read the article

  • Best way to setup hosts, subdomains, and IPs [closed]

    - by LynnOwens
    I own a domain, let's call it mydomain.com. I need to host the following off it: forums.mydomain.com, www.mydomain.com, blog.mydomain.com and objects.mydomain.com. I believe I can get 5 static IPs. I plan on assigning one each to those four hosts. Then I need to create names ad hoc, all below objects.mydomain.com - for instance: one.objects.mydomain.com, two.objects.mydomain.com, three.objects.mydomain.com. I need to create these names programmatically, and without human intervention. Preferably, they would not get their own IPs; they would use the IP of objects.mydomain.com. First question: does this mean that I need to host my own DNS? Second question: I'm using Apache as a web server. What does the virtual host configuration look like? I was experimenting with the following to understand how routing on domain names works, and I always ended up at www.

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.mydomain.com
            ServerAlias www.mydomain.com
            DocumentRoot "E:/Static/www"
            RewriteEngine On
            RewriteRule ^(/www/.*) /www$1
        </Virtualhost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName forums.mydomain.com
            ServerAlias forums.mydomain.com
            DocumentRoot "E:/Static/forums"
            RewriteEngine On
            RewriteRule ^(/forums/.*) /forums$1
        </Virtualhost>

    Read the article

  • Is there an alternative to the term "calling object"?

    - by ybakos
    Let's suppose you've got a class defined (in pseudocode):

        class Puppy {
            // ...
            string sound = "Rawr!";
            void bark() {
                print(sound);
            }
        }

    And say, given a Puppy instance, you call its bark() method:

        Puppy p;
        p.bark();

    Notice how bark() uses the member variable sound. In many contexts, I've seen folks describe sound as the member variable of the "calling object." My question is: what's a better term to use than "calling object"? To me, the object is not doing any calling. We know that member functions are in a way just functions with an implicit this or self parameter. I've come up with "receiving object," or "message recipient," which makes sense if you're down with the "messaging" paradigm. Do any of you happy hackers have a term that you like to use? I feel it should mean "the object upon which a method is called" and TOUWAMIC just doesn't cut it.
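
    One way to see why "receiver" fits: in Python, the object a method is invoked on is passed as an explicit first parameter, conventionally named self. A tiny sketch mirroring the Puppy pseudocode above (not taken from the question):

        class Puppy:
            def __init__(self):
                self.sound = "Rawr!"

            def bark(self):
                # "self" is the receiver: the object the method was invoked on.
                print(self.sound)

        p = Puppy()
        p.bark()        # the usual call syntax
        Puppy.bark(p)   # equivalent: the receiver is just the first argument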

    Read the article

  • Facebook contest policy no-no?

    - by Fred
    I would like to post a link on a Facebook page that exits Facebook entirely and goes to a client's website, where people land on a page (the client's) where they can enter their e-mail address to be entered in a temporary database file, with rules and disclosures etc., for a draw once the number of entries reaches 100, for instance. Once the number of entries reaches 100, a random winner is picked and notified via e-mail. The functionality is as follows:

    1. A link is placed on a Facebook page leading to an external page.
    2. The page is a form to merely enter their email address for a contest.
    3. The email is placed in a temporary file.
    4. An automatic e-mail is sent to the address used, for confirmation, using a SHA-256 hash.
    5. The person receives the e-mail saying something to the effect of "Please confirm your e-mail address etc. - If you did not authorize this, simply ignore this message and no further action will be taken".
    6. If the person clicks on the confirmation link, the e-mail is then stored in the database and the person is notified again, saying "Thank you for signing up etc."
    7. Once others do the same process and the database reaches a certain number, the form is no longer accessible and a random e-mail is automatically picked. Once picked, an e-mail is automatically sent to the winner stating the instructions, and notifying me also.
    8. Once that person clicks yet another confirmation link, the database is then automatically deleted.

    I have built this myself and have no intentions of breaking any rules, nor of jeopardizing the work/time/energy I have put into this project. Is this allowed?
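
    As an aside, a minimal sketch of the double-opt-in token idea described in steps 4-6, assuming a random salt is stored with each pending entry (this is not the asker's actual code; Python is used purely for illustration):

        # Sketch: generate and verify an e-mail confirmation token with SHA-256.
        # Salting with a random secret means the token cannot be guessed from
        # the address alone.
        import hashlib
        import secrets

        def make_confirmation_token(email):
            salt = secrets.token_hex(16)
            token = hashlib.sha256((salt + email).encode("utf-8")).hexdigest()
            return salt, token   # store both alongside the pending entry

        def confirm(email, salt, presented_token):
            expected = hashlib.sha256((salt + email).encode("utf-8")).hexdigest()
            return secrets.compare_digest(expected, presented_token)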

    Read the article

  • How to leverage the internal HTTP endpoint available on Azure web roles?

    - by adelsors
    Imagine you have a web application using an in-memory collection that changes occasionally, loading it from storage on the Application_Start global.asax event and updating it whenever it changes. If you want to deploy this application on Azure you need to keep in mind that more than one instance of the application can be running at any time, and therefore you need to provide some mechanism to keep all instances informed of the latest changes. Because communication through internal endpoints between Azure role instances incurs no cost, a good solution can be to maintain the information in Azure Storage Tables, read its contents on the Application_Start event and propagate changes to all instances using the internal HTTP port available on Azure web roles. You need to follow these steps to leverage the internal HTTP endpoint available on Azure web roles:

    1. Define an internal HTTP endpoint in the Web Role properties, for example InternalHttpEndpoint.
    2. Add a new WCF service to the Web Role, for example NotificationServices.svc.
    3. Add a method on the new service to receive notifications from other role instances.
    4. Declare a class that inherits from System.ServiceModel.Activation.ServiceHostFactory and override the method CreateServiceHost to host the internal endpoint. Note that you can use SecurityMode.None because the internal endpoint is private to the instances of the service; this is provided by the platform.
    5. Edit the markup of the service (right-click the svc file and select "View markup") to add the new factory as the factory to be used to create the service.
    6. Now you can notify changes to other instances using this code:

    Read the article

  • How do I debug an upstart job?

    - by Cerales
    I have the following job in /etc/init/collector:

        start on runlevel [2345]
        stop on runlevel [!2345]
        expect daemon
        exec /usr/bin/twistd -y /path/to/my/tac/file

    When I start the job with sudo service collector start, it hangs. If I ctrl-c and run initctl list, I see this:

        collector start/killed, process 616

    I can't see an instance of the twistd daemon in ps, and the HTTP server it's supposed to be providing does not exist. I even tried this without 'expect daemon' and with a simple call to a one-line bash script using a script stanza, and it still doesn't work. I think I'm doing something very wrong. What could it be?

    Read the article

  • Help with DB Structure, vOD site

    - by Chud37
    I have a video-on-demand style site that hosts series of videos under different modules. However, with the way I have designed the database, it is proving to be very slow. I have asked this question before and someone suggested indexing, but I cannot seem to get my head around it. I would like someone to help with the structure of the database here to see if it can be improved. The core table is Videos:

        ID       bigint(20)    (primary key, auto-increment)
        pID      text
        airdate  text
        title    text
        subject  mediumtext
        url      mediumtext
        mID      int(11)
        vID      int(11)
        sID      int(11)

    pID is a unique 5-digit string for each video that is a shorthand identifier. airdate is the timestamp (stored in text format - right there, maybe I should change that to an auto-updating TIMESTAMP), title is self-explanatory, subject is self-explanatory, url is the hard link on the site to the video, mID is joined to another table for the module title, vID is joined to another table for the language of the video (English, Russian, etc.) and sID is the summary for the module, a paragraph stored in an external database. The slowest part of the website is the logging part of it. I store the data in another table called 'Hits':

        id      mediumint(10)  (primary key, auto-increment)
        progID  text
        ts      int(10)

    Again (this was all made a while ago), my timestamp (ts) is an INT instead of ON UPDATE CURRENT_TIMESTAMP, which I guess it should be. However, this table is now 47,492 rows long and the script that I wrote to process it is very, very slow - so slow in fact that it times out. A row is added to this table each time a user clicks 'Play' on the website; the progID is the same as the pID, and it logs the PHP time() timestamp in ts. Basically I load the entire 'Hits' table into an array and count the hits in each day using the ts column. I am guessing (I'm quite slow at all this, but I had no idea this would happen when I built the thing) that this is possibly the worst way to go about it. So my questions are as follows: Is there a better way of structuring the 'Videos' table? If so, what do you suggest? Is there a better way of structuring 'Hits'? If so, please help/tell me! Or are my tables fine and the PHP coding crappy?
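
    For the counting part specifically, per-day totals can be computed by the database instead of looping over every row in application code. A sketch of that idea (the question uses PHP/MySQL; SQLite and Python are used here purely for illustration, with the table and column names taken from the question):

        # Sketch: let the database aggregate hits per day rather than loading
        # the whole Hits table into memory. In MySQL the expression would be
        # DATE(FROM_UNIXTIME(ts)) instead of date(ts, 'unixepoch').
        import sqlite3

        conn = sqlite3.connect("vod.db")
        rows = conn.execute(
            """
            SELECT date(ts, 'unixepoch') AS day, COUNT(*) AS hits
            FROM Hits
            GROUP BY day
            ORDER BY day
            """
        ).fetchall()

        for day, hits in rows:
            print(day, hits)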

    Read the article

  • Ways to deduplicate files

    - by User1
    I want to simply back up and archive the files on several machines. Unfortunately, the files include some large files that are the same file but stored differently on different machines. For instance, there may be a few hundred photos that were copied from one computer to the other as an ad-hoc backup. Now that I want to make a common repository of files, I don't want several copies of the same photo. If I copy all of these files to a single directory, is there a tool that can go through and recognize duplicate files and give me a list, or even delete one of the duplicates?
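
    Dedicated tools exist for this (fdupes and rdfind, for example), but the underlying idea is simple enough to sketch: group files by a hash of their contents and report any group with more than one member. A minimal illustration in Python (the repository path is a placeholder):

        # Sketch: find duplicate files by content hash.
        import hashlib
        import os
        from collections import defaultdict

        def find_duplicates(root):
            by_hash = defaultdict(list)
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    digest = hashlib.sha256()
                    with open(path, "rb") as f:
                        for chunk in iter(lambda: f.read(1 << 20), b""):
                            digest.update(chunk)
                    by_hash[digest.hexdigest()].append(path)
            return [paths for paths in by_hash.values() if len(paths) > 1]

        for group in find_duplicates("/path/to/repository"):
            print(group)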

    Read the article

  • Why am I getting [mount error(22): Invalid argument] while trying to mount SMB network drive?

    - by Steve_
    Disclaimer: I am very new to Linux :) Anyway, onward: I have a fresh instance of Ubuntu Server (12.04.1 LTS) running on my network and I want to mount a network drive on the server so I can access its contents. The network drive is a Samba-compatible drive running Darwin OS. If I run the following command:

        smbclient -L //192.168.0.2 -U myuser

    it prompts me for the password and then displays output similar to:

        Domain=[SERVER01] OS=[Darwin] Server=[@(#)PROGRAM:smbd PROJECT:smbx-105.4.0]

        Sharename                    Type   Comment
        ---------                    ----   -------
        Comp Staff's Public Folder   Disk
        CompRaid03                   Disk
        Dropbox                      Disk
        Groups                       Disk
        IPC$                         IPC
        Public                       Disk
        Users                        Disk
        compstaff                    Disk

    However, when I try to mount the CompRaid03 share, using this command:

        sudo mount -t cifs //192.168.0.2/CompRaid03 /mnt/myshare -o username=myuser

    I get the same password prompt, but after putting the correct password in, I receive this error:

        mount error(22): Invalid argument

    dmesg | tail returns:

        [23576.037373] CIFS VFS: cifs_mount failed w/return code = -22

    I don't understand what is wrong with this command. I've managed to mount a share on my current (Windows 8) machine using basically the same command but with a different IP address and share name (obviously). I've spent a good few hours trying to solve this and got nowhere. Any help or pointers would be greatly appreciated. Thanks, Steve

    EDIT: As suggested, I've also tried using "user=" instead of "username=":

        sudo mount -t cifs //192.168.0.2/CompRaid03 /mnt/svnrepo -o user=myuser

    This results in the same "Invalid argument" error.

    Read the article

  • How to structure reading of commands given at a(n interactive) CLI prompt?

    - by Anto
    Let's say I have a program called theprogram (the marketing team was on strike when the product was to be named). I start that program by typing, perhaps not surprisingly, the program name as a command into a command prompt. After that, I get into a loop (from the user's standpoint, an interactive command-line prompt), where one command will be read from the user, and depending on what command was given, the program will execute some instructions. I have been doing something like the following (in C-like pseudocode):

        main_loop {
            in = read_input();
            if (in == "command 1")
                do_something();
            else if (in == "command 2")
                do_something_else();
            ...
        }

    (In a real program, I would probably encapsulate more things into different procedures; this is just an example.) This works well for a small number of commands, but let's say you have 100, 1000 or even 10 000 of them (the manual would be huge!). It is clearly a bad idea to have 10 000 ifs and else ifs after each other: the program would be hard to read, hard to maintain, and contain a lot of boilerplate code. You don't want to do that, so what approach would you recommend I use? (I will probably never use 10 000 commands in a program, but the solution should, at least preferably, be able to scale to problems of that size. The solution doesn't have to allow for arguments to the commands.)
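
    A common alternative to the if/else chain is a dispatch table mapping command names to handler functions; a minimal sketch in Python (the command names and handlers are placeholders):

        # Sketch: table-driven command dispatch instead of a long if/else chain.
        # Adding a command means adding one entry, not another branch.
        def do_something():
            print("doing something")

        def do_something_else():
            print("doing something else")

        COMMANDS = {
            "command 1": do_something,
            "command 2": do_something_else,
            # ... potentially thousands more, or entries registered by plugins
        }

        def main_loop():
            while True:
                line = input("> ").strip()
                if line in ("quit", "exit"):
                    break
                handler = COMMANDS.get(line)
                if handler is None:
                    print("unknown command:", line)
                else:
                    handler()

        if __name__ == "__main__":
            main_loop()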

    Read the article

  • Storing a pass-by-reference parameter as a pointer - Bad practice?

    - by Karl Nicoll
    I recently came across the following pattern in an API I've been forced to use:

        class SomeObject
        {
        public:
            // Constructor.
            SomeObject(bool copy = false);

            // Set a value.
            void SetValue(const ComplexType &value);

        private:
            bool m_copy;
            ComplexType *m_pComplexType;
            ComplexType m_complexType;
        };

        // ------------------------------------------------------------
        SomeObject::SomeObject(bool copy) :
            m_copy(copy)
        {
        }

        // ------------------------------------------------------------
        void SomeObject::SetValue(const ComplexType &value)
        {
            if (m_copy)
                m_complexType.assign(value);
            else
                m_pComplexType = const_cast<ComplexType *>(&value);
        }

    The background behind this pattern is that it is used to hold data prior to it being encoded and sent to a TCP socket. The copy weirdness is designed to make the class SomeObject efficient by only holding a pointer to the object until it needs to be encoded, but also to provide the option to copy values if the lifetime of the SomeObject exceeds the lifetime of a ComplexType. However, consider the following:

        SomeObject SomeFunction()
        {
            ComplexType complexTypeInstance(1); // Create an instance of ComplexType.
            SomeObject encodeHelper;
            encodeHelper.SetValue(complexTypeInstance); // Okay.
            return encodeHelper; // Uh oh! complexTypeInstance has been destroyed, and
                                 // now encoding will venture into the realm of undefined
                                 // behaviour!
        }

    I tripped over this because I used the default constructor, and this resulted in messages being encoded as blank (through a fluke of undefined behaviour). It took an absolute age to pinpoint the cause! Anyway, is this a standard pattern for something like this? Are there any advantages to doing it this way vs overloading the SetValue method to accept a pointer that I'm missing? Thanks!

    Read the article

  • Is there a feature in Nagios that allows Memory between checks?

    - by Kyle Brandt
    There are various instances where there are values I want to monitor with Nagios, and I don't care as much about the value itself, but rather how it compares to the previous value. For instance, I wrote one to check the fail counters in OpenVZ. In that case, I didn't care about the value that much, but rather whether the value had increased. Another example might be switch ports: I would be most interested in getting alerted about a change of state of a port (although perhaps a trap would be better for that one). For my OpenVZ script, I used a temp file, but I am wondering if there is a better way? Maybe Nagios has some variables that plugins (check scripts) can access that are persistent across checks?
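
    For reference, the temp-file approach itself generalizes into a small reusable pattern: persist the last value, compare it with the current one, and exit with a Nagios status code. A rough sketch (the state-file path and the read_current_value stub are assumptions; this is not a built-in Nagios feature):

        # Sketch: a check that alerts when a counter has increased since the
        # previous run, remembering state in a small JSON file.
        import json
        import os
        import sys

        STATE_FILE = "/var/tmp/check_failcnt.state"

        def read_current_value():
            # Placeholder: a real plugin would read the OpenVZ fail counters,
            # an SNMP value, etc.
            return 42

        def main():
            current = read_current_value()
            previous = None
            if os.path.exists(STATE_FILE):
                with open(STATE_FILE) as f:
                    previous = json.load(f).get("value")
            with open(STATE_FILE, "w") as f:
                json.dump({"value": current}, f)

            if previous is not None and current > previous:
                print(f"CRITICAL - counter increased from {previous} to {current}")
                sys.exit(2)   # Nagios exit code for CRITICAL
            print(f"OK - counter is {current}")
            sys.exit(0)

        if __name__ == "__main__":
            main()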

    Read the article

  • Installing software from source

    - by Learning
    I'm trying to understand the rationale behind installing software from source in Ubuntu 12.04. Obviously, I know I can download what I need from repos, but I want to develop a deeper understanding of Linux. As a Windows user, when I download a program I double-click it, and it installs into the Program Files directory unless I specify otherwise. When I want to uninstall it, Windows has a tool that does that for me. When I install a program in Linux, where does it install to? How do I uninstall it afterwards? Are there residual files left over? How would I tell if it's been fully removed? For instance, I'm going to install LMMS (Linux Multimedia Studio) from source. I download and decompress the tarball, and have a folder named lmms_XXX.tar (whatever). The folder is now on my desktop with an install file. If I run the install file from that location, does it install into that folder? If so, can I move that folder wherever I want? I was thinking about putting it in /opt/lmms.

    Read the article

  • SQL Server Configuration Scripting Utility Release 9

    - by Bill Graziano
    There's another update to my little utility to script a SQL Server's configuration. I use this for two purposes. First, I use it to keep my database mirroring servers up to date. Second, I capture the output in a version control system and keep that for historical reference. In release 3.0.9 I made the following changes:

    - Rewrote the encrypted trigger scripting. It will now list the encrypted triggers in a comment in the table script but can't actually script them.
    - It now scripts any server event notifications.
    - You can script a single database using the /scriptdb flag. Please note that it will also script the instance and system databases when it does this.
    - It will script any user-defined endpoints. This will capture your mirroring endpoints and, more importantly, any Service Broker endpoints.
    - It will gracefully skip Database Mail on the Express Edition.

    It still doesn't support SQL Server 2012. I think that's the next feature to add, though.

    Read the article

  • What poor management decisions have you had to deal with?

    - by tombull89
    As a junior technician I've had to deal with (or will have to deal with) some problems, and being only a junior technician I don't have the confidence or the respect from management staff to speak up. For instance, we're getting an entirely new system - going from Server 2003/XP to Server 2008 R2/Win7/VMware/digital signage - and the amount of time dedicated to training the IT support department currently stands at zero. They seem to think that all IT systems are the same and are going to get a bit of a shock when I can't help them. I think there's some UK legislation saying a school/business has to put money and time aside for training, but I'm not sure. What have you had to deal with?

    Read the article

  • Unable to synchronize local and remote directories ("set times: Operation not permitted")

    - by Tom Auger
    I'm running into FTP errors using software like NetBeans or WinSCP: whenever I attempt to synchronize or update files from local to server, I get errors on the client saying "set times: Operation not permitted". This is clearly an issue with the way I've configured my Fedora installation. The user that I'm logging in with cannot touch -t any of these files, though he IS part of a group that has r/w access on the files. I do have root / sudo access to this server. What I would like to know is: a) is it likely that this problem would be solved by allowing my FTP user to "touch -t" these files? b) how do I enable a certain user to set timestamps on files without giving them ownership of the files (certain of these files need to be owned by Apache, for instance, so I don't want to chown them)? Thanks in advance.

    Read the article

  • Load balanced proxies to avoid an API request limit

    - by ClickClickClick
    There is a certain API out there which limits the number of requests per day per IP. My plan is to create a bunch of EC2 instances with elastic IPs to sidestep the limitation. I'm familiar with EC2 and am just interested in the configuration of the proxies and a software load balancer. I think I want to run a simple TCP proxy on each instance and a software load balancer on the machine I will be requesting from - something that allows the following to return a response from a different IP each time (round robin, availability, doesn't really matter):

        curl http://www.bbc.co.uk -x http://myproxyloadbalancer:port

    Could anyone recommend a combination of software, or even a link to an article that details a pleasing way to pull it off? (My client won't be curl but it is proxy-aware; I'll be making the requests from a Ruby script.)
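
    If it helps to picture the client side, here is a rough sketch of the rotation idea (the question's client is a Ruby script; Python is used here only for illustration, and the proxy addresses are made up):

        # Sketch: rotate each request through a pool of proxies so that no
        # single source IP exceeds the API's per-day request limit.
        import itertools
        import requests

        PROXIES = itertools.cycle([
            "http://proxy-a.example.com:3128",
            "http://proxy-b.example.com:3128",
            "http://proxy-c.example.com:3128",
        ])

        def fetch(url):
            proxy = next(PROXIES)
            return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)

        response = fetch("http://www.bbc.co.uk")
        print(response.status_code)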

    Read the article

  • Issue with corrupt 1 TB volume: can we rescue SQL databases?

    - by Barrie Ashmore
    We have an instance where somehow a 1 TB volume has become corrupted. There are Event ID 55 errors in the system event viewer. When running chkdsk, it gets to 47% while verifying indexes and then returns to the command prompt. If we are unable to rescue the data on the drive, we have a full backup of the databases from some time ago; the log files for them are on a different volume, which is intact. Would it be viable to restore the databases and somehow replay the log files? I have seen articles about replaying log files from a log file backup (.trn); however, we have the actual log files (.ldf). Any help would be appreciated.

    Read the article

  • Multi-site Drupal install with sites on different ports using Apache ip-based hosting?

    - by MattB
    In the past we've used name-based virtual hosting in Apache. We recently converted websites to SSL and had to go the IP-based route. As a result, we currently have an instance that is set up as follows: www.domain.com using port 80, and dev.domain.com using port 8080. Both use the same IP. Is this scenario possible using Drupal's multi-site functionality? While we find that dev.domain.com works and reads the correct "dev" database (using the dev settings), it reads theme files from the "www" site instead, which is not what we want. Is the culprit the dev site's htaccess file? Apache is listening on 8080 and does use the proper DB settings, just not the correct theme files. One other note: browsing dev.domain.com:8080 gives an error: "The page isn't redirecting properly". Should we just purchase a new IP address for the dev website, or would this still not help? Any advice would be appreciated. Thanks.

    Read the article

  • Fix X11 forwarding on OSX

    - by Such
    I am looking for a way to fix/debug an X11 forwarding session on OS X. Here is my situation: from my Mac I connect to an Ubuntu workstation with ssh -X (tried ssh -Y as well). X11 forwarding works perfectly with Firefox, for instance: X11/Quartz is started automatically on OS X and Firefox is displayed. X11 forwarding does not work with bat (the Bacula graphical console): X11 is started but no window is displayed. There are no errors (/private/var/log/system.log). When I try doing the same from another Ubuntu workstation, it works perfectly for both Firefox and bat. I guess the problem is on the OS X side, then. I tried switching some options in X11 but nothing works. Would you have any idea on how to move forward? Thanks!

    Read the article

  • What causes Remote Desktop Services Manager to crash in Server 2008 R2?

    - by milkmood
    I have this consistent problem of RDSM crashing in Server 2008 R2. It is either really slow to open, sometimes never opens, or after it's been open and working properly for a bit, stops working, and forces an unload of the snap-in. It's done this since the deployment of this server, new hardware, new instance of S2k8. Domain Administrator login. I am using it to manage 3 Terminal Servers, the other two are S2k3. I've used it without issues on other 2008 servers.

    Read the article

  • Can I use static routing to allow me to use my public IP from my LAN?

    - by jnm2
    I would like to be able to use the same hostname to connect to my computer from my phone whether I'm at home or away. Currently I have to maintain duplicate entries for remote desktop, for instance. My router doesn't seem to have a NAT loopback option. I have two routers, in fact: a cable modem which goes straight to my main router, which does the wireless. I can add to the static routing tables on each. Can I use this to loop back the public IP, or do I need different routers?

    Read the article

  • Share code between check and process methods

    - by undu
    My job is to refactor an old library for GIS vector data processing. The main class encapsulates a collection of building outlines and offers different methods for checking data consistency. Those checking functions have an optional parameter that allows some processing to be performed. For instance:

        std::vector<Point> checkIntersections(int process_mode = 0);

    This method tests whether some building outlines are intersecting, and returns the intersection points. But if you pass a non-null argument, the method will modify the outlines to remove the intersection. I think it's pretty bad (at the call site, a reader not familiar with the code base will assume that a method called checkSomething only performs a check and doesn't modify data) and I want to change this. I also want to avoid code duplication, as the check and process methods are mostly similar. So I was thinking of something like this:

        // a private worker
        std::vector<Point> workerIntersections(int process_mode = 0)
        {
            // it's the equivalent of the current checkIntersections; it may perform
            // a process depending on process_mode
        }

        // public interfaces for check and process
        std::vector<Point> checkIntersections() /* const */
        {
            workerIntersections(0);
        }

        std::vector<Point> processIntersections(int process_mode /* I have different process modes */)
        {
            workerIntersections(process_mode);
        }

    But that forces me to break const correctness, because workerIntersections is a non-const method. How can I separate check and process, avoiding code duplication and keeping const-correctness?

    Read the article
