Search Results

Search found 24266 results on 971 pages for 'api design'.


  • sys.dm_exec_query_stats interaction with recompilation

    - by Sam Saffron
    We use sys.dm_exec_query_stats to track down slow queries and queries that are IO offenders. This works great and we get a lot of very insightful stats, though it is clearly not as accurate as running a profiler trace, since you have no idea when SQL Server will decide to throw out an execution plan.

    We have quite a few queries where the wrong execution plan is cached, for example queries like the following:

        SELECT TOP 30 a.Id
        FROM Posts a
        JOIN Posts q ON q.Id = a.ParentId
        JOIN PostTags pt ON q.Id = pt.PostId
        WHERE a.PostTypeId = 2
          AND a.DeletionDate IS NULL
          AND a.CommunityOwnedDate IS NULL
          AND a.CreationDate > @date
          AND LEN(a.Body) > 300
          AND pt.Tag = @tag
          AND a.Score > 0
        ORDER BY a.Score DESC

    The problem is that the ideal plan really depends on the date selected (screenshot of ideal plan). However, if the wrong plan is cached, it totally chokes when the date range is big (notice the big fat lines). To overcome this we were recommended to use either OPTION (OPTIMIZE FOR UNKNOWN) or OPTION (RECOMPILE). OPTIMIZE FOR UNKNOWN results in a slightly better plan, which is still far from optimal, but executions are tracked in sys.dm_exec_query_stats. RECOMPILE results in the best plan being chosen, however no execution counts or stats are tracked in sys.dm_exec_query_stats.

    Is there another DMV we could use to track stats on queries with OPTION (RECOMPILE)? Is this behavior by design? Is there another way we can force recompilation while keeping stats tracked in sys.dm_exec_query_stats? Note: the framework will always execute parameterized queries using sp_executesql.
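
    For reference, a minimal Python sketch of the kind of DMV query described above; the connection string, database name and TOP threshold are placeholders, and pyodbc is just one possible driver. Statements run with OPTION (RECOMPILE) will not show up in this view, which is exactly the limitation being asked about.

        import pyodbc  # placeholder driver choice; any SQL Server client works

        # Aggregated stats for cached plans, worst logical-read offenders first.
        DMV_SQL = """
        SELECT TOP 20
               qs.execution_count,
               qs.total_logical_reads,
               qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
               SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                         (CASE WHEN qs.statement_end_offset = -1
                               THEN DATALENGTH(st.text)
                               ELSE qs.statement_end_offset END
                          - qs.statement_start_offset) / 2 + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;
        """

        conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                              "SERVER=dbserver;DATABASE=StackDB;Trusted_Connection=yes")
        for row in conn.cursor().execute(DMV_SQL):
            print(row.execution_count, row.total_logical_reads, row.statement_text[:80])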

    Read the article

  • Understanding Microsoft Word Formatting Behavior. Does anyone?

    - by deemer
    This isn't a question about how to do something (well, not directly), but rather an inquiry to see if anyone understands why MS Word behaves the way it does with respect to formatting, from a design perspective. This is also, admittedly, a rant about Word. It is a question that has plagued me pretty much every time I open a document in Word, and it covers a lot of individual topics, so I'll restrict the discussion here to two concrete behaviors that baffle me.

    1) Backspacing over whitespace changes the format of the preceding text. This seems to happen most often when the preceding text is a header or list number. The strangest thing is that the new format of the changed text usually doesn't appear anywhere else in the document.

    2) Numbered lists count almost at random. I am working on a document today where the list numbers count as follows: 1, 2, 2, 3, 3. The lettering in the sublists goes like this: 1: a, 2: a, 2: b c d, 3: e f g, 3: a. Clicking on each number or letter highlights the other numbers or letters that Word thinks it is related to, which are scattered around the document pretty heavily. Attempts to renumber the list have so far proven fruitless, as Word seems to maintain these associations through clipboard copies, etc.

    Why does this even happen?

    Read the article

  • pfSense routing between two routers with shared network

    - by JohnCC
    I have a network set-up using two pfSense routers arranged like this:

        DMZ1   WAN1        WAN2   DMZ2
          |      |           |      |
          |      |           |      |
          \___ PF1          PF2 ___/
                 |           |
                 |           |
                 \__TRUSTED__/

    Each pfSense router has its own separate WAN connection and a separate DMZ network attached to it. They share a common TRUSTED LAN between them. The machines on the trusted network have PF1 as their default gateway. PF1 has a static route defined to DMZ2 via PF2, and PF2 has a static route to DMZ1 via PF1. There is NAT to the WAN, but the internal networks (DMZ1/2 and TRUSTED) use different RFC1918 subnets.

    I inherited this arrangement, and it all used to work fine. I made a config change to PF1 (relating to multicast), and machines on DMZ2 suddenly could not talk to TRUSTED. I rolled the change back, but the problem persisted. What I guess you'd hope would happen is that TCP packets would go DMZ2 - PF2 - TRUSTED and on return TRUSTED - PF1 - PF2 - DMZ2. That's the only way I can see it would have worked. However, PF1 drops the returning packets; I've verified this using tcpdump.

    I've worked around this by adding static routes to DMZ2 via PF2 on the servers on TRUSTED, but some devices on there do not support static routes, so this is not ideal. Is there a way to make this arrangement work decently, or is the design inherently flawed? Thanks!

    Read the article

  • Is it possible to trace someone using Google during an online exam?

    - by George
    I happen to be a professor at a reputed college. I want to design an online exam for over 1000 students, via around 50 computers, right after the vacation ends. The problem is that I have heard that many students use Google in a different tab to find answers when no invigilator is around. I want to know if there is a way to trace this after the exams, via some kind of history or any other possible way.

    In our university there is a standard system. I am not good with computers, but I will try to explain. Each computer uses Mozilla to connect to a centrally located server via an IP. The students open it and enter a unique ID and password to start the exam. Many questions are jumbled, and different groups of students take the exam in different time slots. Is there any way to trace it? I want to set an example for students so they won't cheat and will take exams honestly.

    Additional details: since the number of computers is smaller than the number of students, more than 10 students are going to use a single computer on a single day, over a period of 10 hours. After this, if I check the history (and let's say someone even forgot to delete the history and I see it), will I be able to figure out who among the 10 has done it? Moreover, is it even practical and feasible?

    Read the article

  • Change to a different user, or let a different user execute a command

    - by WG-
    I have a problem. There is a server which I can access over ssh with an account, let's say WG. Now there is a folder with the following permissions:

        drwxr-s---+ 855 vvz www-data 20K Aug 21 17:56 pictures

    I want to copy this folder using rsync, however since I am not the user www-data but WG, I cannot execute rsync. So I want www-data to execute an rsync command. However, I do not possess sudo powers.

    My friend tells me that I am actually able to execute the rsync command as www-data, but he will not tell me how. I asked him for some clues and he told me that it had something to do with a reverse shell (which I figured out to mean that you connect by ssh to your server and then connect back to your own server, or something). I also asked if it was by design or actually a flaw in the system; he tells me it is both.

    Furthermore, I think it has something to do with the group permissions. If I just make sure that I have the group's permissions, then I can also read the files. Does anybody have a clue?
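
    On the group-permission angle, a minimal Python sketch (the paths and backup host are placeholders, and getting added to the group is assumed to need an admin): the folder is group-owned by www-data with group read access, so membership in that group is enough to read and rsync it, no sudo and no www-data shell required.

        import grp
        import os
        import subprocess

        TARGET_GROUP = "www-data"                  # group that owns the pictures folder
        SRC = "/path/to/pictures/"                 # placeholder source path
        DST = "backup-host:/srv/backup/pictures/"  # placeholder rsync destination

        # drwxr-s--- means: owner rwx, group r-x (plus setgid), others nothing.
        # Any member of www-data can therefore read the tree as a regular user.
        gid = grp.getgrnam(TARGET_GROUP).gr_gid
        if gid in os.getgroups():
            subprocess.run(["rsync", "-a", SRC, DST], check=True)
        else:
            print("Not in group %s; an admin would need to add the account "
                  "(and the account would need to log in again)." % TARGET_GROUP)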

    Read the article

  • Create custom launchers in GNOME 3

    - by hochl
    I'm using Debian testing, and I was switched to GNOME 3 by the Debian update yesterday. I'm not very comfortable with the UI. I wanted to customize everything like I had it with GNOME 2, but I simply couldn't find any way to change preferences like I'm used to. I've dug around some, but none of the answers I could find helped me achieve my goals. So please, if anyone knows the solution to any of this I'd be thankful:

    1) I want several launchers that launch terminals, with different arguments and different coloring/titles. I have searched everywhere and there seems to be no menu, no right-click, nothing that is standard in any UI I know. How can I create several launchers in the bar on the left side that launch the same application, just with different parameters? With GNOME 2 this was a piece of cake.

    2) I want to switch between different terminals using ALT-TAB. Right now, I'm always just getting to the same, already-opened terminal. When I open two terminals, for example by creating the second one with xterm &, I still get one Terminal entry when I press ALT-TAB, and I have to navigate with the cursor keys or mouse wheel to select one of the two xterms. Instead, I want the quick-launch terminal icon in the bar on the left side of the screen to open a new terminal, and to navigate through them like on KDE/GNOME 2/Windows/any reasonable UI. Can this be done?

    3) Is there a trick to make Bluetooth devices work like on GNOME 2? Right now, my BT keyboard won't pair anymore, which, as you can imagine, makes me pretty angry.

    And, if all else fails: 4) How can I switch back to GNOME 2 again? :-)

    Honestly, who designed this? What were they smoking? I feel like I'm not allowed to do anything except start any application that has an icon, and only with the default parameters. That can't be true, right? I feel massively restrained by this stuff :(

    Read the article

  • Lotus Notes 8.0.2 - how to stop all mail showing in customized view

    - by mikolajek
    I am using Lotus Notes 8.0.2 at work, and unfortunately the admin has restricted changing the design of the default folders. Only small changes are possible (e.g. changing the column order), and even those are reset each time I restart the client.

    I've created a new view with my desired column order, changed the sorting, etc. I have only one problem: even though I changed the view preference to show messages from the Inbox folder only, I keep seeing all mail, regardless of the folder it is placed in. I'm not a Lotus expert and don't really know how to code, yet I am surprised, because in a "simple view" I see this:

        uses '(ChangeMeetingType), ...' form AND In folder 'Inbox'

    and in the formula view only this:

        SELECT ((Form = "(ChangeMeetingType)") | (Form = "(Return Receipt)") | (Form = "Return Receipt") |
                (Form = "(ReturnNonReceipt)") | (Form = "ReturnNonReceipt") | (Form = "Memo") | (Form = "Memo") |
                (Form = "MemoEA") | (Form = "Reply") | (Form = "Reply") | (Form = "Reply With History") |
                (Form = "Reply With History") | (Form = "To Do") | (Form = "Task") | (Form = "_Document Memo") |
                (Form = "$DocMemo") | (Form = "Word. Document$Word Memo") | (Form = "WordPro. Document$Word Pro Memo") |
                (Form = "AlternateMemo"))

    Therefore, it looks like no folder has really been selected. How can I create a solution that shows the Inbox contents only? Just messages, invitations and other "normal" stuff, without calendar entries and contacts?

    Read the article

  • Industrial strength cloud file storage

    - by ArthurG
    I'm looking for an industrial-strength cloud file storage system. It will be used by multiple people in a startup. Our requirements:

    - Transparent file system access: files and folders in the file system must be able to transparently access (read and write) files in the cloud; files must be synchronized whenever network access is available and buffered otherwise. The system must be usable by non-technical people.
    - Access control: we need to control who can access which files, at least on a very coarse basis. E.g., the developers will be able to access the system design documents, only the corporate folks can access recruiting documents, and only management can access certain corporate documents. Dropbox provides this via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user. So the cloud service should have a notion of an account (our startup) with multiple users, with distinct credentials and rights for each user.
    - Clients: it must be accessible from Macs and PCs; I would hope that it supports Linux (e.g., Ubuntu) too.
    - Security: it must provide robust security.
    - Backup: the cloud service must reliably back up the files.
    - Versioning: change version history is a big plus, but not required.
    - Not free: we're willing to pay for the service.

    So far, we've reviewed the following, albeit not completely thoroughly:

    - Dropbox: has all except 1) access control, which is provided via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user, and 2) security, as discussed here http://www.economist.com/blogs/babbage/2011/05/internet_security and here http://blog.dropbox.com/?p=821.
    - Windows Live Mesh: has all except 1) clients, only supporting Windows 7 and OS X.
    - SpiderOak: has all except 1) transparent file system access, which is only available for 1 user.
    - Amazon Cloud: doesn't offer 1) transparent file system access.
    - Rackspace Cloud Drive: has all except 1) access control and 2) versioning.

    I'll gladly include any clarifications or additional systems the community provides. Arthur

    Read the article

  • Pulling application updates from closest server?

    - by Mike Morris
    Setup: 6 major sites with Server 2003/2008 DCs doing DHCP/AD-integrated DNS, each on their own subnet. All connect back to the datacenter through a 3 Mbps WAN. The ERP server runs in the datacenter and is accessed by clients at all sites.

    Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients. It determines what subnet the PC is on, and pulls the software from that DC. It's messy, but it works.

    The client has an autoupdate feature, but it'll only pull from the application server (which is housed in the datacenter, over the 3 meg link). It takes forever, since the updates are not "patches" but a full version of the client, even for minor upgrades (bad design). After the most recent patch, you can configure the clients to pull from a different server. Unfortunately, it is the same server for all clients.

    Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites? Example: client 1 is at site A, client 2 is at site B. They each run the software and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client.

    - Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A)
    - DNS at site A returns 192.1.1.5
    - Client 1 pulls the update from 192.1.1.5
    - Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B)
    - DNS at site B returns 192.1.2.5
    - Client 2 pulls the update from 192.1.2.5

    Excuse the poor explanation, I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!
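
    For what it's worth, a small Python sketch of the client-side behavior this scheme relies on (the ERPUPDATE alias and download path are placeholders): the client always asks for the same name, and each site's DNS, set up split-horizon style with a different A record per site, hands back the address of the local server.

        import socket
        import urllib.request

        UPDATE_HOST = "erpupdate"           # placeholder alias; each site's DNS returns a local IP
        UPDATE_PATH = "/client/latest.zip"  # placeholder path on the update server

        # Same name everywhere; which address answers depends on which site's DNS you ask.
        ip = socket.gethostbyname(UPDATE_HOST)
        print("ERPUPDATE resolves to", ip, "at this site")

        with urllib.request.urlopen("http://%s%s" % (ip, UPDATE_PATH)) as resp, \
                open("latest.zip", "wb") as out:
            out.write(resp.read())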

    Read the article

  • SNMP query - operation not permitted

    - by jperovic
    I am working on an API that reads a lot of data via SNMP (routes, interfaces, QoS policies, etc.). Lately, I have experienced a random error stating:

        Operation not permitted

    I use SNMP4J as the core library and cannot really pinpoint the source of the error. Some Stack Overflow questions have suggested the OS being unable to open a sufficient number of file handles, but increasing that parameter did not help much. The strange thing is that the error occurs only when iptables is up and running. Could it be that the firewall is blocking some traffic? I have tried writing a JUnit test that mimicked the application's logic, but no errors were fired. Any help would be appreciated! Thanks!

    IPTABLES:

        *nat
        :PREROUTING ACCEPT [2:96]
        :POSTROUTING ACCEPT [68:4218]
        :OUTPUT ACCEPT [68:4218]
        # route redirect for SNMP trap and syslog
        -A PREROUTING -i eth0 -p udp -m udp --dport 514 -j REDIRECT --to-ports 33514
        -A PREROUTING -i eth0 -p udp -m udp --dport 162 -j REDIRECT --to-ports 33162
        COMMIT
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        .....
        # SNMP
        -A INPUT -p udp -m state --state NEW -m udp --dport 161 -j ACCEPT
        # SNMP trap
        -A INPUT -p udp -m state --state NEW -m udp --dport 162 -j ACCEPT
        -A INPUT -p udp -m state --state NEW -m udp --dport 33162 -j ACCEPT
        .....
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT

    Read the article

  • Relevance and Necessity of SNMP

    - by Adam Tannon
    Edit: I am in the process of designing a Java-based monitoring tool that will send back periodic "health checks" of a Java app deployed to a cluster of GlassFish servers. I am trying to figure out the best protocol for this monitoring tool to use when sending information back to the monitoring server.

    After an initial research effort on my part, it seems like SNMP is just a protocol for monitor-type applications to communicate the "health status" of something (a part of a network, a server, a cluster, an application, etc.) to the rest of the network. If the above is incorrect, please correct me!!!

    Assuming the generalization is more or less accurate, my next question is: why is this a protocol!?!? In the age of REST/SOAP/TCP protocols, why is there a need for a standardized protocol that only fits one type of application (monitoring)? In other words, if I'm a developer assigned to building a new monitoring tool that periodically polls a server and reports on its CPU and available memory, what advantages does SNMP give me over just POSTing to a RESTful API via plain 'ole HTTP? I'm sure I'm missing something here - I just need someone to help connect the dots! Thanks in advance!
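
    To make the comparison concrete, here is a minimal sketch (in Python rather than Java, with a made-up endpoint and payload) of the plain-HTTP alternative mentioned above. SNMP's main draw is that existing NMS tooling, standard MIBs, polling agents and traps already speak it; a custom REST endpoint like this only talks to your own monitor.

        import json
        import urllib.request

        MONITOR_URL = "http://monitor.example.com/healthchecks"  # placeholder endpoint

        payload = {                      # made-up health-check fields
            "host": "glassfish-node-3",
            "app": "orders-service",
            "cpu_percent": 42.5,
            "free_heap_mb": 612,
        }

        # One HTTP POST of a JSON document per polling interval; no agents, no MIBs.
        req = urllib.request.Request(
            MONITOR_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print("monitoring server answered", resp.status)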

    Read the article

  • Why does domain.com appear as theplanet.com when hosted on HostGator?

    - by silow
    I have a script that's supposed to detect the URL of the website calling it. If the caller is another website, it should give something like http://callersite.com. I'm using this line of PHP code (though I suspect this won't matter to sysadmins):

        gethostbyaddr($_SERVER['REMOTE_ADDR'])

    I'm testing with a caller site that's hosted on HostGator. What I'm noticing, though, is that I don't get callersite.com; I get something like 1a.12.12ab.static.theplanet.com. I don't know what theplanet.com is or why I'm not getting callersite.com. Also, what do I need to do to really get the domain of the site making a call to my script?

    Thanks for the explanation. Some have advised I use $_SERVER['HTTP_REFERER'], but it's not what I'm after. My script acts as an API: another website makes a curl request to it, gets an output, and later on presents it to the user. So the HTTP referrer gives false, since callersite.com is making a direct call to me. Any hope?
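
    For illustration, the same lookup in Python (the IP below is a placeholder): a reverse lookup only returns whatever PTR record the hosting provider set for that IP, which on shared hosting is usually the provider's generic name rather than the site's domain; that is why a name under theplanet.com (the datacenter operator) shows up instead of callersite.com.

        import socket

        caller_ip = "203.0.113.7"   # placeholder for REMOTE_ADDR of the calling site

        try:
            # Equivalent of PHP's gethostbyaddr(): a PTR (reverse DNS) lookup.
            hostname, aliases, addresses = socket.gethostbyaddr(caller_ip)
            print(hostname)          # e.g. something like xx.xx.static.theplanet.com
        except socket.herror:
            print("no PTR record for", caller_ip)

    In practice, a calling site that needs to be identified reliably usually has to identify itself, for example by sending an API key along with its request.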

    Read the article

  • Is it possible to create a simple frontend indexer for openbittorent torrents?

    - by SimonK
    I run a website which distributes a few files every now and again: live music performances by a rock band. I create a torrent file and set the trackers to openbittorrent, publicbt and other similar open trackers. I upload the torrent file to my forum, my users download it and the files are shared. No problems there.

    What I would like to do is index those torrents properly on my website, so I can follow seeders/leechers and other stats online. I know the open torrent trackers don't have an index, but I am aware of many, many indexing sites that do that exact thing. I don't know how, though.

    So what I'm asking is: what do I need to do to do that myself? I simply want to create a page that lists the torrents that I and other users on my site create, the seeder/leecher ratio, a link to the torrent file, etc. What data do I need to be able to do that? I'm proficient in general web design, but I don't know what I would need, data-wise, to pull the required info on the torrents. Thanks
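
    The data needed per torrent is its 20-byte info_hash (the SHA-1 of the bencoded info dictionary inside the .torrent file); with that, open trackers can be asked for seeder/leecher counts via the UDP tracker "scrape" exchange (BEP 15). A rough Python sketch follows; the tracker address and info_hash are placeholders, and a real indexer would add retries and error handling.

        import random
        import socket
        import struct

        TRACKER = ("tracker.openbittorrent.com", 80)  # placeholder tracker host/port
        INFO_HASH = bytes.fromhex("0123456789abcdef0123456789abcdef01234567")  # placeholder

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(5)

        # 1) connect request: magic protocol id, action 0 (connect), random transaction id
        tid = random.getrandbits(32)
        sock.sendto(struct.pack(">QII", 0x41727101980, 0, tid), TRACKER)
        action, rtid, connection_id = struct.unpack(">IIQ", sock.recv(16))
        assert action == 0 and rtid == tid

        # 2) scrape request: connection id, action 2 (scrape), transaction id, info_hash
        tid = random.getrandbits(32)
        sock.sendto(struct.pack(">QII", connection_id, 2, tid) + INFO_HASH, TRACKER)
        resp = sock.recv(8 + 12)
        seeders, completed, leechers = struct.unpack(">III", resp[8:20])
        print("seeders:", seeders, "completed:", completed, "leechers:", leechers)

    HTTP trackers generally expose the same three counters through a /scrape URL derived from their /announce URL.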

    Read the article

  • Linux Distro - GUI similar to Windows

    - by DeaconDesperado
    I am in the process of refurbishing several older laptop machines for use by a couple of college guys we have in training to learn basic web development in Python. These are students who intern at my company and are hoping to do some work when the summer comes building simple client-oriented webapps (learning the basics of OOP, MVC webapp design in Flask, etc.). We're trying to function as the "practical" side of their education.

    I would like to get them set up on these machines we have sitting about, but I'd like to use a Linux distro with a GUI that closely approximates what they are being compelled to use at school (Windows). I don't really have much of a preference as far as the GUI goes, since much of what we'll be learning together is accomplished on the command line; I just see this as an easier adjustment for them while they are still reliant on a graphical environment.

    In the past I'd go straight for Ubuntu, but since they started using the Unity GUI the overall responsiveness can be pretty clunky on older machines, especially since these machines (there are four of them) run the gamut on specs (though all are at least 1.0 GHz and none have anything better than basic integrated video). Has anyone had to set up a similar working environment in Mint, bare Debian or Zorin? Thanks.

    Read the article

  • lighttpd on Fedora permission issues

    - by Isaac Gateno
    I'm trying to get started with lighttpd on Fedora 16 to run a RESTful API for development. Right now, even with the most basic sample config file, I'm getting 404 pages when I know the pages I'm pointing at exist. From reading other questions I'm leaning towards this being a permissions issue, but I'm confused about how lighttpd runs on Fedora. There's a user called "lighttpd", not "www-data"? I can't see this user in the system-config-users tool and I can't su into it to check which permissions it has.

    I'm trying to point lighttpd at "/var/www/lighttpd", which has some example pages in it. The permissions for the files inside are set to -rw-r--r-- and the permissions for the folder containing them are drwxr-xr-x. Doesn't that mean that any user can view these files? I'm not sure what else I should be checking, as I don't have much experience with server configuration. Any help would be appreciated.

    Edit: I was following the tutorial configuration here, so the lighttpd.conf file contains

        server.document-root = "/var/www/lighttpd/"
        server.port = 3000
        mimetype.assign = (
          ".html" => "text/html",
          ".txt" => "text/plain",
          ".jpg" => "image/jpeg",
          ".png" => "image/png"
        )

    and I was just trying to get the basic example page working.
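
    On the permissions angle, here is a small Python sketch that just prints the mode and owner of every directory leading down to the document root (path taken from the sample config above). The Fedora package creates "lighttpd" as a system account, which is why it doesn't show up in the desktop user tool; for that unprivileged user to serve a file, every directory on the way down needs the execute (search) bit for "other" and the file itself needs the read bit, unless lighttpd owns them or shares their group.

        import os
        import pwd
        import stat

        DOCROOT = "/var/www/lighttpd"   # matches the sample config above

        path = "/"
        for part in [p for p in DOCROOT.split("/") if p]:
            path = os.path.join(path, part)
            st = os.stat(path)
            owner = pwd.getpwuid(st.st_uid).pw_name
            # Prints e.g. "/var/www  drwxr-xr-x  root" so a missing o+x is easy to spot.
            print("%-22s %s %s" % (path, stat.filemode(st.st_mode), owner))

    Separately, the sample config defines no index-file.names, so requesting the bare document root (rather than an explicit file such as /index.html) could produce a 404 even when the permissions are fine.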

    Read the article

  • mysql - moving to a lower performance server, how small can I go?

    - by pedalpete
    I've been running a site for a few years now which really isn't growing in traffic, and I want to save some money on hosting but keep it going for the loyal users of the site and API. The database has one nearly 4-million-row table, and it runs on a 4 GB dual Xeon 5320 server. When I check server stats on this server with ps -aux, I see mysql running at about 11% capacity, so no serious load. The main query against MySQL runs in about 0.45 seconds.

    I popped over to linode.com to see what kind of performance I could get out of one of their tiny boxes, and their 360 MB RAM Xen VPS returns the same query in 20 seconds. Clearly not good enough.

    I've looked at the MySQL variables, and they are both very similar (I've included the "show variables" output below, if anybody is interested). Is there a good way to decide on what size server is needed based on what I'm coming from? Is it RAM that is likely making the difference, given the large table size? Is there a way for me to figure out how much RAM would be ideal?

    Here's the output of show variables (though I'm not sure it is important):

        | Variable_name | Value |
        | auto_increment_increment | 1 |
        | auto_increment_offset | 1 |
        | automatic_sp_privileges | ON |
        | back_log | 50 |
        | basedir | /usr/ |
        | bdb_cache_size | 8384512 |
        | bdb_home | /var/lib/mysql/ |
        | bdb_log_buffer_size | 262144 |
        | bdb_logdir | |
        | bdb_max_lock | 10000 |
        | bdb_shared_data | OFF |
        | bdb_tmpdir | /tmp/ |
        | binlog_cache_size | 32768 |
        | bulk_insert_buffer_size | 8388608 |
        | character_set_client | latin1 |
        | character_set_connection | latin1 |
        | character_set_database | latin1 |
        | character_set_filesystem | binary |
        | character_set_results | latin1 |
        | character_set_server | latin1 |
        | character_set_system | utf8 |
        | character_sets_dir | /usr/share/mysql/charsets/ |
        | collation_connection | latin1_swedish_ci |
        | collation_database | latin1_swedish_ci |
        | collation_server | latin1_swedish_ci |
        | completion_type | 0 |
        | concurrent_insert | 1 |
        | connect_timeout | 10 |
        | datadir | /var/lib/mysql/ |
        | date_format | %Y-%m-%d |
        | datetime_format | %Y-%m-%d %H:%i:%s |
        | default_week_format | 0 |
        | delay_key_write | ON |
        | delayed_insert_limit | 100 |
        | delayed_insert_timeout | 300 |
        | delayed_queue_size | 1000 |
        | div_precision_increment | 4 |
        | keep_files_on_create | OFF |
        | engine_condition_pushdown | OFF |
        | expire_logs_days | 0 |
        | flush | OFF |
        | flush_time | 0 |
        | ft_boolean_syntax | + - |

    For some reason, that table formats properly in the preview, but apparently not when viewing the question. Hopefully it isn't needed anyway.
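
    As a rough way to approach the "how much RAM" question, the per-table data and index sizes can be read from information_schema and compared against the candidate box; a common rule of thumb is that at least the indexes of the hot tables (plus whatever working set of rows the main query touches) should fit in the key/InnoDB buffers. A small Python sketch, with placeholder credentials and pymysql as an arbitrary driver choice:

        import pymysql  # arbitrary driver choice; the SQL is what matters

        SIZE_SQL = """
        SELECT table_name,
               ROUND(data_length  / 1024 / 1024) AS data_mb,
               ROUND(index_length / 1024 / 1024) AS index_mb
        FROM information_schema.TABLES
        WHERE table_schema = DATABASE()
        ORDER BY data_length + index_length DESC
        """

        conn = pymysql.connect(host="localhost", user="me",
                               password="secret", database="mysite")
        with conn.cursor() as cur:
            cur.execute(SIZE_SQL)
            for name, data_mb, index_mb in cur.fetchall():
                print("%-30s data %6s MB  index %6s MB" % (name, data_mb, index_mb))
        conn.close()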

    Read the article

  • How to know who accessed a file, or whether a file has an 'access' monitor, in Linux

    - by J L
    I'm a noob and have some questions about viewing who accessed a file. I found there are ways to see if a file was accessed (not modified/changed) through the audit subsystem and inotify. However, from what I have read online, according to http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html, to watch/monitor a file I have to set a watch using a command like:

        # auditctl -w /etc/passwd -p war -k password-file

    So if I create a new file or directory, do I have to use an audit/inotify command to set a watch first, in order to see who accessed the new file? Also, is there a way to know if a directory is already being watched through the audit subsystem or inotify? How/where can I check the access log for a file?

    Edit: from further googling, I found this page: http://www.kernel.org/doc/man-pages/online/pages/man7/inotify.7.html. It says "The inotify API provides no information about the user or process that triggered the inotify event." So I guess this means that I can't figure out which user accessed a file with inotify? Can only the audit subsystem be used to figure out who accessed a file?

    Read the article

  • powershell vs GPO for installation, configuration, maintenance

    - by user52874
    My question is about using PowerShell scripts to install, configure, update and maintain Windows 7 Pro/Ent workstations in a 2008 R2 domain, versus using GPO/ADMX/MSI.

    Here's the situation: because of a comedy of cumulative corporate bumpfuggery, we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008 R2 and Windows 7 Pro/Enterprise rollout on very short notice and a tight delivery schedule. Of course, I'm not a Windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate' and 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then went on to Solaris and Cisco, then Linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms.)

    So we decided to bring in a contractor to do this for us, and they met the deadline. The system is up and mostly usable, and this is good; we would not have been able to do this ourselves. But it's the 'mostly' part that is proving to be the PIMA now, and I'm having to learn Microsoft stuff anyway until/if we can get a new contract with these guys for ongoing operations.

    Here's my question. The contractor used PowerShell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that the generally accepted practices for deploying, configuring and updating Microsoft stuff use elements of GPOs and ADMX templates, along with maybe some third-party stuff like PolicyPak. Are there solid reasons that I've not found yet why PowerShell scripts would be preferred over the GPO methods?

    I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background on this. Thoughts? Or weblinks? Thanks!

    Read the article

  • Understanding where an Amazon EC2 instance runs

    - by kenzo450D
    I am currently using the AWS API from my local desktop. I can successfully take backups of my Amazon volumes, and even create an AMI from them. Now, when I want to run an instance built from this AMI, where does the instance run: in their Elastic Compute Cloud, or on the computer from which the command was issued? Suppose I want to create the new instance in a new region (locations as defined in ec2-describe-regions); how would I do that? It seems I have a poor understanding of the relation between Amazon volumes and instances; please explain it. I am only allowed to use the CLI tools to do all of my work.

    I made a new snapshot of the existing instance, made an AMI using ec2-register, made a keypair, and then followed these steps: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-an-instance.html#launching-an-instance-cli. But I got an error like this:

        Client.InvalidParameterValue: The requested instance type's architecture (i386) does not match the architecture in the manifest for aki-fc37bacc (x86_64)

    My local computer is 32-bit, but I do not want to run the instance on my local computer; I want it on Amazon's servers.
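
    To illustrate the relationships: the instance always runs in Amazon's data centers, in whichever region the API call is sent to, never on the machine issuing the command; the snapshot/AMI is just a template stored in that region. The error above means the AMI/kernel is registered as x86_64 while the chosen instance type is an i386 one, so picking a 64-bit instance type (or registering a 32-bit AMI) resolves the mismatch; the local machine's architecture is irrelevant. The asker is limited to the ec2-* CLI tools, but those wrap the same API calls, shown here as a Python/boto3 sketch with placeholder IDs.

        import boto3

        # The region decides *where in EC2* the instance will run; it has nothing
        # to do with the desktop the command is issued from.
        ec2 = boto3.client("ec2", region_name="eu-west-1")

        print([r["RegionName"] for r in ec2.describe_regions()["Regions"]])

        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # placeholder: a 64-bit (x86_64) AMI in eu-west-1
            InstanceType="m1.large",          # a 64-bit instance type, matching the AMI's architecture
            MinCount=1,
            MaxCount=1,
            KeyName="my-keypair",             # placeholder keypair name
        )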

    Read the article

  • Windows 7 Laptop Randomly Sleeping

    - by Josh Arenberg
    I'm running Windows 7 64-bit on a Lenovo laptop, and I have a problem wherein after I've put it to sleep once, it will, seemingly without any reason or warning, go back to sleep every 10 to 15 minutes after that. Each time it wakes back up just fine and you can happily continue on; however, 10 or 15 minutes later it just drops out into sleep mode again.

    I don't see any errors in the system logs. I don't think it's related to heat (since it's not rebooting). When I check the system event log, I can see where the machine goes to sleep, but it simply says "reason: application API" and doesn't indicate which app (brilliant). I don't see any errors from hardware, or anything relating to sleep in the application log, that would point to what is going on either. How can I find out which app is triggering this?

    Edit: I confirmed that temperature wasn't the issue. I think it's important to keep in mind that this condition only happens after the first sleep. If I reboot, the problem goes away until I put it to sleep, after which it sleeps on its own every few minutes. Is there no way to capture which app is calling the sleep routine?

    Read the article

  • Output current with Teensy++ 2.0 (arduino-based hardware)

    - by omtinez
    I am working on a project with a Teensy++ 2.0 for testing; eventually the goal is to use a Teensy 2.0 (info on both available here) and mount it onto a robot R/C car along with a Raspberry Pi. I have been able to use and test one of the very cheap distance sensors that use ultrasound, which requires very little current.

    I was then trying to power a motor. I don't know exactly what kind of motor, but I assume a very low-power one, since it is what comes with the cheapo R/C car, but nothing is happening. When I connect the motor to GROUND and +5V it runs fine, but when I use GROUND and one of the GPIO pins, nothing happens with the motor. The same GPIO pins were tested to successfully power and run the ultrasound sensor, so the board is fine.

    My suspicion is that the GPIO pins don't output enough current to power the motor, but my knowledge of electronics is rather scarce (I am a computer scientist, not an electrical engineer). So please forgive me if I am asking something obvious or plain stupid, but does the board not have enough power to drive the motor? If so, I could try to use a second power supply that goes straight into the motor and use the GPIO as a gate to turn that power on and off; would such a thing work? Is there a better design that could be used?

    Read the article

  • Problems in Table of Contents formatting

    - by ChrisW
    Two questions about captions in Word (they are related, hence the same post):

    1) Using Word 2010 (and its inbuilt equation editor), I've got figure captions which contain equations (well, actually, they represent chemical equations, such as nitrate, for which the correct representation is NO3- where the 3 is subscript and the - is superscript, but in the same column). However, when I generate a figure list, the equation displays as NO3- (with no subscript or superscript). Word knows it's an equation, though: the Equation Tools design ribbon/tab is displayed when I click on the NO3-. I've tried changing it from Professional to Linear and similar other obvious options, but still can't get it to display correctly. File to show this problem in action: http://dl.dropbox.com/u/101867759/EqtnTest.docx - note how the (chemical) equation for nitrate is rendered correctly in the 'caption' on page 2, but not in the ToC on page 1.

    2) I have another caption where the whole figure is included in my list of figures. When I double-click on the caption in my text, the caption is highlighted (as expected), but so is the figure (this doesn't happen with any of my other figures), so I assume the figure has been 'linked' in some way to the text. How do I remove this link?

    Read the article

  • Automating Access 2007 Queries (changing one criterion)

    - by Graphth
    So, I have 6 queries and I want to run them all once at the end of each month. (I know a bit about SQL, but they're simply built using Access's design view.) In the next few days, perhaps, I'll run the 6 queries for May, as May just ended. I only want the data from the month that just ended, so each query has its Criteria set to the name of the month (e.g., May).

    Now, it's not hugely time-consuming to change all of these each month, but is there some way to automate it? Currently, they're all set to April and I want to change them all to May when I run them in a few days. Each month, I'd like to type the month just once (perhaps in a textbox in a form, or somewhere else if you know a better way) and have it change all 6 queries, without having to manually open all 6, scroll over to the right field and change the Criteria.

    Note (about VBA): I have used Excel VBA, so I know the basics of VBA, but I don't really know anything specific to Access (other than seeing code a few times). And others will use this who do not know anything about Access VBA. So, while I think I have found a similar question/answer that could do this in VBA, I'd rather do it some other way. If the query needs to be slightly redesigned later, probably by someone who doesn't know Access VBA at all, it'd be nice to have a solution not involving VBA, if that is even possible.

    Read the article

  • Find out what fonts are being sent to a printer

    - by user38307
    I have an issue where two computers running XP, with identical print drivers, show different behavior when printing over parallel port to receipt printers. For one type of receipt, printing is instant. For another kind, printing is delayed by about ten seconds on one machine but not on the other. This happens even if I swap out the printers.

    I believe the delay is because this computer has a different set of fonts installed (it is used for graphic design). The printers have built-in fonts, and if you do not use one of the built-in fonts the printer has to build up an image in memory rather than just spitting out its own fonts. For a particular kind of receipt with special fonts, on this particular computer, the computer is sending a font which the receipt printer does not have built in.

    My question is: is there a way to find out what fonts are being sent to the printer? This would let me narrow down what I need to modify in the Windows font folder. Thank you!

    Read the article

  • Bing Desktop not updating the wallpaper anymore

    - by warmth
    For some reason, first my workstation and then my tablet stopped updating the wallpaper. At first I thought it was my company preventing the app from working properly, but then I started noticing that the app itself is a mess: it keeps the wallpapers in two different locations, with two different formats:

        C:\Users\<username>\AppData\Local\Microsoft\BingDesktop\en-US\Apps\Wallpaper_5386c77076d04cf9a8b5d619b4cba48e\VersionIndependent\images  (#####.jpg, a single number)
        C:\Users\<username>\AppData\Local\Microsoft\BingDesktop\themes  (####-##-##.jpg, a date)

    I read here that if you delete the themes folder it will get remade with the new images, and that worked. However, those are not the files used by the Wallpaper app, and deleting the images folder won't get the same result. I have added Bing Desktop to the firewall whitelist and the issue is still there. Any ideas?

    Currently I'm using DisplayFusion to place the wallpaper manually, because the company doesn't allow changing the wallpapers (policies).

    Note: I wrote to the DisplayFusion developers to suggest adding a feature to support Bing wallpapers. They told me there was no API support to implement it, but they will study this possibility (workaround) for the future: http://stackoverflow.com/questions/10639914/is-there-a-way-to-get-bings-photo-of-the-day

    Read the article
