Search Results

Search found 27973 results on 1119 pages for 'power point vba'.

Page 332 of 1119

  • A question about the .NET Rfc2898DeriveBytes class?

    - by IbrarMumtaz
    What is the difference with this class, as opposed to just using Encoding.ASCII.GetBytes(string)? I have had reasonable success with either approach; the former is more long-winded, whereas the latter is simple and to the point. Both seem to let you do the same thing eventually, but I am struggling to see the point of using the former over the latter. The basic concept I have grasped is that you can convert string passwords into byte arrays to be used by, e.g., a symmetric encryption class such as AesManaged. With the RFC class, though, you also get to use a salt value along with the password when creating your rfc object. I assume that makes it more secure, but that's an uneducated guess at best! It also lets you ask for byte arrays of a certain size, or something like that. Here are a few examples to show where I am coming from: byte[] myPassinBytes = Encoding.ASCII.GetBytes("some password"); or string password = "P@%5w0r]>"; byte[] saltArray = Encoding.ASCII.GetBytes("this is my salt"); Rfc2898DeriveBytes rfcKey = new Rfc2898DeriveBytes(password, saltArray); The 'rfcKey' object can now be used to set up the .Key or .IV properties on a symmetric encryption algorithm class, e.g. RijndaelManaged rj = new RijndaelManaged(); rj.Key = rfcKey.GetBytes(rj.KeySize / 8); rj.IV = rfcKey.GetBytes(rj.BlockSize / 8); and 'rj' should be ready to go! The confusing part: rather than using the 'rfcKey' object, can I not just use my 'myPassinBytes' array to help set up my 'rj' object? I have tried doing this in VS2008 and the immediate answer is no, but do you have a better-educated answer as to why the RFC class is used over the alternative I mentioned above?
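
    A minimal sketch of the contrast being asked about, pieced together from the snippets above (the console output, the class scaffolding, and the key/block size comments are my own illustration, not from the post):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class KeyBytesSketch
        {
            static void Main()
            {
                string password = "P@%5w0r]>";
                byte[] saltArray = Encoding.ASCII.GetBytes("this is my salt");

                // Raw encoding: the byte count is just the password length,
                // which rarely matches the key size an algorithm expects.
                byte[] myPassinBytes = Encoding.ASCII.GetBytes(password);

                // PBKDF2 (RFC 2898): salts and iterates the password, and can
                // emit exactly as many bytes as the algorithm asks for.
                Rfc2898DeriveBytes rfcKey = new Rfc2898DeriveBytes(password, saltArray);
                RijndaelManaged rj = new RijndaelManaged();
                rj.Key = rfcKey.GetBytes(rj.KeySize / 8);    // 32 bytes for the default 256-bit key
                rj.IV = rfcKey.GetBytes(rj.BlockSize / 8);   // 16 bytes for the 128-bit block

                Console.WriteLine("raw password bytes: {0}, derived key bytes: {1}",
                    myPassinBytes.Length, rj.Key.Length);
            }
        }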

    Read the article

  • ASUS P6X58D and 12 GB of RAM

    - by Hunter
    I just assembled my new PC and it screams, BUT it's only recognizing 8 GB of RAM in BIOS and OS rather than the installed 12 GB. In the BIOS the RAM was registering at 1066 MHz so I set it to 1600 MHz. I updated the BIOS to the latest non-beta release. http://www.asus.com/product.aspx?P_ID=wurRaDZ8lo4Ckukj
    - OS: Windows 7 Ultimate 64-bit
    - Motherboard: ASUS P6X58D Premium
    - CPU: Intel Core i7 920
    - RAM: CORSAIR XMS3 12 GB (6 x 2 GB) 240-Pin DDR3 SDRAM DDR3 1600
    - HDD: Intel 80 GB SSD SATA II
    - Power supply: Kingwin 1000 W Modular
    I've installed the Beta BIOS 0808 but no luck!

    Read the article

  • Linux: Why change the inode size?

    - by FractalizeR
    Hello. tune2fs allows you to change the inode size from the default 128 bytes to almost anything (as long as it is a power of two). What can be the reasons for changing the default inode size? Here http://kbase.redhat.com/faq/docs/DOC-7433 it is written that this can be done in order to store ACL attributes inside the inode. What else can be stored inside an inode? Maybe some other attributes, or anything else? Is there any reason to increase the inode size on modern high-capacity drives (2 TB and more)?

    Read the article

  • Open Source webapp that shows PC / Projector status in 30+ Lecture rooms

    - by Seanchán
    I am looking for a simple web application that just gives a simple graphical representation of the current status of 30+ lecture rooms, i.e. green = good, red = bad (PC or projector not working), with a little message and an ETA as well. I am not looking for monitoring software, merely a way for a tech to flag a room as "technically challenged" until 1 PM or until "Friday 10 AM", with a message for those lecturers who are interested: "Waiting on replacement bulb" or "Power supply gone". I know this is a simple thing to code up yourself, but I am looking for something that has been around for a few years and has some cool extra little functionality that you wouldn't think of yourself. I just can't find anything like that out there. And just to be clear: not monitoring software, more like a lecturer feedback web app.

    Read the article

  • How to get rid of resume information on Ubuntu 9.10 ("karmic")

    - by Glen S. Dalton
    I am on an old laptop with Ubuntu 9.10 installed. I once tried not to shut down but to go into one of the suspend/resume states. On the next power-on, resume did not work; instead there was an error message during boot asking me for the resume image (which I do not have or know of), and when I press Enter the normal boot happens. This error now pops up on every boot. How can I get back the behaviour from before? Why does the boot process assume there is a resume image, and can I delete this information? I would like to post the error messages from the boot process here, but they are not in /var/log/syslog; where else could they be?

    Read the article

  • Can I provision half a core as a virtual CPU?

    - by ramdaz
    I am a virtualization newbie, so please advise on these questions. Note that using commercial VM software like Citrix or VMware is not an option for me. I have at my disposal a couple of 2x 4-core servers with 32 GB RAM. I need to create 16 VMs on each server to test some web applications. Can I provision half a core as a virtual CPU for each VM? To the best of my knowledge I can't do so on Xen; is it possible on KVM or some other free, open source VM solution? If it's not possible to assign half a core, how do I ensure that uniform processing power is available to all VMs? Since the job is to create separate instances for hosting 16 web apps on one physical server, would you recommend setting up a private cloud using Ubuntu Enterprise Cloud as a better option? Is there an HA solution under KVM, like Remus for Xen?

    Read the article

  • Cablemodem frequent connection loss

    - by LVDave
    I have a Linksys BEFCMU10 cable modem and a WRT54GL router with Tomato 1.27 firmware on Cox cable. My question is this: I get what seem to be random disconnects from the internet, where the cable modem lights are still normal, but I can connect to nothing, whether by URL or by IP address. While these disconnects are happening, I can still go to the router's Tomato management web page and release/renew my external IP address from Cox's DHCP server. I've had Cox look at the signal levels on the cable modem, and they say they look fine. What brings the modem back, sometimes for as long as 17 days, is several power-cycles of the modem. I don't understand the underlying cable modem technology too well, but I do know that if I'm able to release/renew the DHCP-provided WAN address, I'd expect the cable modem to be working OK... Anybody have any ideas?

    Read the article

  • Trouble move-capturing std::unique_ptr in a lambda using std::bind

    - by user2478832
    I'd like to capture a variable of type std::vector<std::unique_ptr<MyClass>> in a lambda expression (in other words, "capture by move"). I found a solution that uses std::bind to capture a unique_ptr (http://stackoverflow.com/a/12744730/2478832) and decided to use it as a starting point. However, the most simplified version of the proposed code I could get does not compile (lots of template errors; it seems to try to call unique_ptr's copy constructor). #include <functional> #include <memory> std::function<void ()> a(std::unique_ptr<int>&& param) { return std::bind( [] (int* p) {}, std::move(param)); } int main() { a(std::unique_ptr<int>(new int())); } Can anybody point out what is wrong with this code? EDIT: I tried changing the lambda to take a reference to the unique_ptr (and also a const reference), and it still doesn't compile: #include <functional> #include <memory> std::function<void ()> a(std::unique_ptr<int>&& param) { return std::bind( [] (std::unique_ptr<int>& p) {}, std::move(param)); } int main() { a(std::unique_ptr<int>(new int())); }
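
    For what it's worth, a sketch of one common workaround (my own illustration, not taken from the linked answer): std::function requires a copyable callable, and the bind expression above owns a move-only unique_ptr, so handing ownership to a shared_ptr makes the resulting lambda copyable.

        #include <functional>
        #include <memory>

        // Ownership moves into a shared_ptr; the lambda that captures it is
        // copyable, so it can be stored in a std::function.
        std::function<void ()> a(std::unique_ptr<int>&& param)
        {
            std::shared_ptr<int> owned = std::move(param);
            return [owned]() { /* use *owned here */ };
        }

        int main()
        {
            a(std::unique_ptr<int>(new int()));
        }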

    Read the article

  • HP OfficeJet 5600 w/ Dlink DPR1260 not printing except from Notepad

    - by joelarson
    I have an OfficeJet 5600 attached to a D-Link DPR1260 wireless print router. It has worked great for a long time. Today we had a power outage and afterwards the printer was inaccessible. I reset the DPR1260 and reinstalled the drivers and so forth. I can now print from Notepad, but not from any Office programs, IE, or Chrome. I have toggled many settings on and off, such as spooling/not spooling, etc. Nothing has helped. Any ideas? I am running Windows Vista.

    Read the article

  • Remote Desktop or Streaming Software/Services that Supports Gaming

    - by Griffin
    I've simply been amazed by the quality and speed of OnLive, as this technology has the potential of making hardware requirements irrelevant to the average user. However, at the moment OnLive is only for remotely controlling video games, not desktops or other devices in general. I'm in pursuit of software or services that can accomplish this as well as OnLive does. I need:
    - viewer (client) program portability (able to run from a USB stick)
    - DirectX / OpenGL / full-screen game compatibility on the server side
    - gaming-acceptable color/scaling quality and responsiveness
    I have a very powerful desktop at home and I want to be able to access this raw power from any other computer that I plug my USB stick into (in the same way OnLive gives gamers use of its powerful servers). Which software/services have most of the above? NOTE: please specify which features your suggestion doesn't have.

    Read the article

  • What firewall Linux distro appliance could track internet usage per device in my home?

    - by GregH
    Hello. Does anyone know of a community edition/open source/free firewall/gateway software product that I could install onto an old PC to act as my firewall/gateway/proxy etc., but which has the power to track internet usage per device in my home? So:
    a) Mandatory - track internet usage for devices on my home network on a per-device basis (e.g. various PCs/Xbox etc.)
    b) Mandatory - a report/graph that gives a breakdown of internet usage, per device (e.g. IP address), per day
    c) Desirable - as in b) above, but per hour
    d) Desirable - a realtime graph (e.g. 5-minute measurement intervals or so) that shows current internet usage per device
    e) Mandatory - handles all internal-to-internet requests for all protocols (e.g. HTTP, HTTPS, Xbox etc.)
    f) Mandatory - no explicit settings required in clients, i.e. a transparent monitoring concept (for both HTTP and non-HTTP traffic like Xbox, Skype etc.)
    g) Mandatory - easy "appliance"-like installation onto a dedicated low-spec PC
    Thanks in advance

    Read the article

  • Is it possible to have a computer run two OSes in memory at the same time?

    - by Hebon
    I'm sick of needing to reboot my computer every time I wish to use another OS, or of running a virtual machine that skimps on power. With the large amounts of memory in computers nowadays, I began to think there must be some way to run two OSes in memory with a way to switch between the two. In my mind it doesn't seem too difficult: a compatibility layer boots up after the BIOS and in turn boots OS1. While in OS1, software is run that triggers a save-to-RAM, boots back to the compatibility layer, and then boots OS2. This way the OSes can be used side by side, and boot times are cut drastically short since both OSes are already in RAM. Each OS has its own designated and protected memory, so there is no problem there... I mean, it seems fine, but no one has done it, so there must be some reason why. I would love some insight into this, please.

    Read the article

  • Team Foundation Server - A programmer's guide

    - by Filip Ekberg
    In addition to my previous topic on how to use SVN (Branch? Tag? Trunk?), I would like to get in depth on how a programmer should/could use TFS. The things that are most interesting to me are not how to set up the server, but rather how you use it on a daily basis. In software engineering your responsibility lies not only in code but in architecture, documentation and other fields, so you need to have a collection of your work, preferably in the same place. These are my points of interest which I would like to learn more about:
    - How would you structure a TFS workspace/project to support lots of different customers/projects, and maybe different projects per customer?
    - Splitting the folder structure of the above project into different pieces such as Code, Documents - Architecture, Requirements and others: what more could there be, and what would be a nice, commonly used folder structure?
    - An easy-to-browse repository; again the folder structure is important here, but this point is aimed more at different explorers for the repository, not only the built-in Team Foundation Explorer.
    These are just a couple of the points that I would like to know more about. Suggestions for beginners' guides, in-depth guides and links covering the above would be very helpful. Please feel free to add other important points as well.

    Read the article

  • rails + compass: advantages vs using haml + blueprint directly

    - by egarcia
    I've got some experience using Haml (+ Sass) on Rails projects. I recently started using them with blueprintcss - the only thing I did was transform blueprint.css into a Sass file and start coding from there. I even have a Rails generator that includes all this by default. It seems that Compass does what I do, and other things. I'm trying to understand what those other things are, but the documentation/tutorials weren't very clear. These are my conclusions:
    - Compass comes with built-in Sass mixins that implement common CSS idioms, such as links with icons or horizontal lists. My solution doesn't provide anything like that. (1 point for Compass.)
    - Compass has several command-line options: you can create a Rails project, but you can also "install" it on an existing Rails project. A Rails generator could be personalized to do the same thing, I guess. (Tie.)
    - Compass has two modes of working with blueprint: "basic" and "semantic" usage. I'm not clear about the differences between those. With my Rails generator I only have one mode, but it seems enough. (Tie.)
    - Apparently, Compass is prepared to use other frameworks besides blueprint (e.g. YUI). I could not find much documentation about this, and I'm not interested in it anyway - blueprint is OK for me. (Tie.)
    - Compass' learning curve seems a bit steep and the documentation sparse, so learning could be a bit difficult. On the other hand, I know the ins and outs of my own system and can use it right away. (1 point for my system.)
    With this analysis, I'm hesitant to give Compass a try. Is my analysis correct? Am I missing any key points, or have I evaluated any of these points wrongly?

    Read the article

  • Robot Simulation in Java

    - by Eddy Freeman
    Hi guys, I am doing a project concerning robot simulation and I need help. I have to simulate the activities of a robot in a warehouse; I am using Mindstorms robots and Lego for the warehouse. The point is that I have to show all the activities of the robot on a Java GUI: whenever the robot is moving, users have to see a moving object on the GUI which represents the robot, and when the roads/rails/crossings of the warehouse change, that must also be reflected on the screen. In short, I have to simulate whatever the robot is doing in the warehouse in real time - everything must happen in real time. I am asking which Java libraries I can use to visualize this simulation in real time, and whether someone can point me to any site with good information. All suggestions are welcome. Thanks for your help.

    Read the article

  • Increase application performance on Amazon AWS

    - by Honus Wagner
    I've got a client with an MVC v1 (.NET) application running on a micro instance. On this instance I've got .NET, IIS 7.5, and MS SQL Server 2008 running to handle the application. The client has reported that it is taking nearly 10 seconds to process each request: even loading the initial login page takes about that long, then logging in takes that long, and so on. The currently running instance specs are as follows: 615 MB RAM, Intel Xeon CPU E5430 @ 2.66 GHz, 2.78 GHz, 64-bit. Is memory availability the issue, or is it the processing power? I foresee two options: change to a larger instance, or set up a 2-tier architecture with two micro instances. Which of these will give the application better performance? Thanks in advance.

    Read the article

  • 2 ATI Radeon HD 5870 (crossfire): Intermittent loud fan and non-functioning secondary card

    - by Merritt
    I installed two Radeon HD 5870s. I left the left side panel off my case (Cooler Master HAF X), started my computer, installed the drivers and set up the CrossFire... yada yada yada... everything worked fine. Then I turned off the computer, put the left side panel back on, turned the computer on, and heard a very loud buzzing (assuming it was a GPU fan). I checked the status of the cards (in Windows 7) and the secondary card did not appear. I turned off the computer, took off the left side panel, jiggled the power connectors into the secondary card, then turned the computer back on, and it worked again. Put the panel on, turned on the computer, worked. Some time later, turned on the computer, and there's that loud fan sound again. What gives?

    Read the article

  • UX question: is it better to have "serious delete" or a "trash"?

    - by ftrotter
    I am developing an application that allows a user to manage some individual data points. One of the things my users will want to do is "delete", but what should that mean? For a web application, is it better to present the user with a serious delete, or to use a "trash" system? Under "serious delete" (would love to know if there is a better name for this...) you click "delete" and the user is warned "this is a final and tragic action. Once you do this you will not be able to get -insert data point name here- back, even if you are crying..." Then if they click delete... well, it truly is gone forever. Under the "trash" model, you never trust that the user really wants to delete; instead you remove the data point from the "main display" and put it into a bucket called "the trash". This gets it out of the user's way, which is what they usually want, but they can get it back if they make a mistake. Obviously this is the way most operating systems have gone.
    The advantages of "serious delete" are:
    - easy to implement
    - easy to explain to users
    The disadvantages of "serious delete" are:
    - it can be tragically final
    - sometimes, cats walk on keyboards
    The advantages of the "trash" system are:
    - the user is safe from themselves
    - bulk methods like "delete a bunch at once" make more sense
    - it saves support headaches
    The disadvantages of the "trash" system are:
    - for sensitive data, you create an illusion of destruction: users think something is gone, but it is not
    - lots of subtle distinctions make implementation more difficult
    - do you "eventually" delete the contents of the trash?
    My question is which one is the right design pattern for modern web applications, with enough discussion to justify your answer... Would love to be pointed towards some relevant research. -FT
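
    A minimal sketch of what the "trash" model usually looks like in code (my own illustration; the names and the grace-period policy are hypothetical, not from the post):

        using System;

        // Soft delete: the item is hidden from the main display, not destroyed.
        public class DataPoint
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public DateTime? DeletedAtUtc { get; set; }   // null = visible, non-null = in the trash
        }

        public static class Trash
        {
            // "Delete" just timestamps the item so it can still be restored.
            public static void MoveToTrash(DataPoint p) { p.DeletedAtUtc = DateTime.UtcNow; }

            public static void Restore(DataPoint p) { p.DeletedAtUtc = null; }

            // One answer to "do you eventually empty the trash?": purge after a grace period.
            public static bool ShouldPurge(DataPoint p, TimeSpan gracePeriod)
            {
                return p.DeletedAtUtc.HasValue &&
                       DateTime.UtcNow - p.DeletedAtUtc.Value > gracePeriod;
            }
        }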

    Read the article

  • Merits and demerits of various Linux fibre channel multipath options

    - by wzzrd
    On our Linux servers we currently use HP's qla2xxx drivers, because they have multipathing (active/passive) built in. There are, however, various other options, like Red Hat's device-mapper-multipath with the stock qla2xxx drivers (multibus and failover) and things like SecurePath and PowerPath (both of which can do trunking, iirc). Can someone tell me what the merits and demerits of the various options are (if I can ask such a question), besides the obvious fact that the {Secure,Power}Path options cost vast amounts of money? I'm mainly interested in the freely available options, like HP's qla2xxx vs. Red Hat's multipathd and possibly other open source solutions, but I would like to hear good reasons to go for the commercial solutions too. UPDATE: I'll be benchmarking various options over the coming few days (the average of 10 iozone runs for each option, the options being native qla2xxx failover, native qla2xxx multibus, and HP qla2xxx failover). I'll post a summary of the results here for those interested.

    Read the article

  • An SVN error (200 OK) when checking out from my online repo

    - by J. LaRosee
    I'm trying to set up my first repo on my host and am getting this error when I use TortoiseSVN to check out the project: Error: OPTIONS of 'http://mywebsite.com/svn/myproject': 200 OK (http://mywebsite.com). Here is what I did:
    1) ssh into my host, go to /home/myaccnt and run 'svnadmin create svn'
    2) create my project repo: 'svn mkdir svn/myproject'
    3) add files to the repo: cd /home/myaccnt/.../myproject (which has /tags, /branch, /trunk); 'svn import file:///home/myaccnt/svn/myproject' (the big ole list of files being added is seen at this point)
    At this point I think I've set up my repo and imported my project into it, so I'm ready to check out using TortoiseSVN on my Windows box. So:
    4) In the folder I'd like to check out to, I right-click, choose 'SVN Checkout', and make sure my URL is http://mywebsite.com/svn/myproject
    Result? Error: OPTIONS of 'http://mywebsite.com/svn/myproject': 200 OK (http://mywebsite.com). Anyone have any thoughts for me? I'm likely missing something fundamental with the structure of my repo or htaccess... or something. Many thanks in advance. -JL

    Read the article

  • Anyone know of a good way to sell used servers?

    - by RandyMorris
    We have a couple of servers we no longer need now that we are fully hosted on a managed host (Rackspace). They were purchased for over $10,000 each, but we realize that over time their monetary value drops. Does anyone have suggestions or experience with selling these in a proper way? They are dual-Xeon, 2U rack-mountable machines with 4+ GB RAM, Intel boards, 6x 72 GB 15,000 RPM SCSI drives with a RAID controller, and redundant power supplies. We are in the Southern California area; I can be more specific about anything if there is interest. I know there is eBay and the like, but these servers are like the family dog that has to be given up, and we are looking for a proper home at a fair price. I will end up auctioning them off if need be, though. Thanks in advance for any help!

    Read the article

  • Suse 12.3 cannot boot after a forced shutdown

    - by David Dai
    I was doing a system update using zypper update. After a while the screen was filled with the message "failed to start system logging service" and the system was not responding, so I had to shut it down by holding the power button. Then I started the machine again and selected to boot SUSE. I saw the fancy boot animation (some shiny big dots gathering at the center of the screen), then the screen just turned black and the monitor said "no signal". I then tried to boot into SUSE failsafe mode, which was fine. How can I investigate this problem?

    Read the article

  • Would OpenID or OAuth work for authorization/authentication on a distributed web service?

    - by David Eyk
    We're in the early stages of designing a RESTful/resource-oriented web service API for a computational linguistics application. Because many of the resources we plan to serve are rights-encumbered, a key design decision has been to specify the platform so that each resource provider can expose their own web service that complies with the API spec. This way, the rights owner maintains control over their content (and thus the ability to throttle or deny access at will) and a direct relationship with the consumer, while still being able to participate in the collaborative network. At the same time, to simplify the job of writing a client for this service, we want to allow a client access to the distributed service through one end-point, with the server handling content negotiation and retrieval from the appropriate providers. Right now, we're at an impasse over authentication/authorization schemes. One of our number has argued for the (technical) simplicity of a central authentication registry, but others are concerned about the organizational complexity of such a scheme. It seems to me, based on an admittedly limited understanding of the technologies, that a combination of OpenID and OAuth would do the trick, with a client authenticating to the end-point via OpenID and the server taking action on the user's behalf with the various content providers using OAuth. I've only ever seen implementations (e.g. stackoverflow, twitter, etc.) where a human was present to intervene, and I still need to do more research on these technologies. Would a scheme like this work for an automated web service, or would it make the client too difficult to implement and operate?

    Read the article

  • Creating a mouse drag done observable with Reactive Extensions

    - by juharr
    I have the following:
        var leftMouseDown = Observable.FromEvent<MouseButtonEventArgs>(displayCanvas, "MouseLeftButtonDown");
        var leftMouseUp = Observable.FromEvent<MouseButtonEventArgs>(displayCanvas, "MouseLeftButtonUp");
        var mouseMove = Observable.FromEvent<MouseEventArgs>(displayCanvas, "MouseMove");
        var leftMouseDragging = from down in leftMouseDown
                                let startPoint = down.EventArgs.GetPosition(displayCanvas)
                                from move in mouseMove.TakeUntil(leftMouseUp)
                                let endPoint = move.EventArgs.GetPosition(displayCanvas)
                                select new { Start = startPoint, End = endPoint };
    When I subscribe to this, it gives me the start point of the drag and the current end point. Now I need to do something once the drag is done. I was unsuccessful in attempting to do this completely with Rx and ended up doing:
        leftMouseDragging.Subscribe(value =>
        {
            dragging = true;
            //Some other code
        });
        leftMouseUp.Subscribe(e =>
        {
            if (dragging)
            {
                MessageBox.Show("Just finished dragging");
                dragging = false;
            }
        });
    This works fine until I do a right mouse button drag: then, when I click the left mouse button, I get the message box. If I only do a left-button drag, I get the message box, and then clicking the left mouse button doesn't produce the box. I'd like to do this without the external state, but if nothing else I'd at least like it to work properly. FYI: I tried making dragging volatile and using a lock, but that didn't work. EDIT: It turns out my problem was with a right-click context menu. Once I got rid of that, my code above worked. So now my problem is how do I keep the context menu and still have my code work?
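
    A sketch of one way to express "drag finished" without the external flag (my own illustration, not from the thread; it reuses the observables defined above and treats a press/release with no movement as a plain click rather than a drag):

        var dragDone = from down in leftMouseDown
                       let start = down.EventArgs.GetPosition(displayCanvas)
                       from up in leftMouseUp.Take(1)          // the release that ends this press
                       let end = up.EventArgs.GetPosition(displayCanvas)
                       where start != end                      // ignore clicks that never moved
                       select new { Start = start, End = end };

        dragDone.Subscribe(drag => MessageBox.Show("Just finished dragging"));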

    Read the article
