Search Results

Search found 35433 results on 1418 pages for 'document based'.

  • I can't remove a background image in PowerPoint 2007

    - by computergeek
    Hello everyone! I have a PowerPoint 2007 document with an annoying background graphic in one of the slides. I know about the slide master, but this little bugger is not something I can delete from the slide master. Argh! The graphic is a little pumpkin at the bottom of the slide and my company logo at the top. Here is a copy of the document: http://www.violetonresumes.com/HelpMeRemoveThisGraphic.pptx I've already lost two hours of my life on this one. Any help would be greatly appreciated. Cheers

  • Lifecycle of an ASP.NET MVC 5 Application

    Here you can download a PDF document that charts the lifecycle of every ASP.NET MVC 5 application, from receiving the HTTP request to sending the HTTP response back to the client. It is designed both as an educational tool for those who are new to ASP.NET MVC and as a reference for those who need to drill into specific aspects of the application. The PDF document has the following features: Relevant HttpApplication stages to help you understand where MVC integrates into the ASP.NET application lifecycle. A high-level view of the MVC application lifecycle, where you can understand the major stages that every MVC application passes through in the request processing pipeline. A detail view that drills down into the details of the request processing pipeline. You can compare the high-level view and the detail view to see how the lifecycle's details are collected into the various stages. Placement and purpose of all overridable methods on the Controller object in the request processing pipeline. You may or may not need to override any one method, but it is important to understand their role in the application lifecycle so that you can write code at the appropriate lifecycle stage for the effect you intend. Blown-up diagrams showing how each of the filter types (authentication, authorization, action, and result) is invoked. A link to a useful article or blog post from each point of interest in the detail view.

  • Stacks in C++

    - by MarkPearl
    So some more basics… One of the things you will be taught at any college after conquering arrays is the different derivatives of collections. The stack is one of the simplest of those, and very useful. A stack is a LIFO (last in, first out) data structure and has at least two basic method calls – push and pop. Push "pushes" an item onto the top of the stack; pop removes the topmost item from the stack. Because all elements on a stack are of the same type, a stack can be implemented with either an array or a linked list. With the array-based approach, the first element on the stack would be the first element in the array, the second on the stack would be the second in the array, and so on. One limitation of an array implementation is that unless the array is dynamic, you have to have a preset maximum stack size (based on the bounds of the array). A linked list is another approach that gets past this limit by allowing you to dynamically grow or shrink the collection. Stacks have many applications… a typical computer science example is a postfix expression calculator, where the LIFO principle is maintained.
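    To make this concrete, here is a minimal sketch of the array-based approach (an illustrative class with an arbitrary fixed capacity of 100; a linked-list version would remove that bound):

        #include <iostream>
        #include <stdexcept>

        // Array-based stack of ints with a preset maximum size.
        class Stack {
            static const int MAX = 100;
            int data[MAX];
            int top = 0;                 // number of elements currently on the stack
        public:
            void push(int value) {
                if (top == MAX) throw std::overflow_error("stack is full");
                data[top++] = value;
            }
            int pop() {
                if (top == 0) throw std::underflow_error("stack is empty");
                return data[--top];
            }
            bool empty() const { return top == 0; }
        };

        int main() {
            Stack s;
            s.push(1); s.push(2); s.push(3);
            while (!s.empty())
                std::cout << s.pop() << ' ';   // prints 3 2 1 – last in, first out
            std::cout << '\n';
        }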

  • Commercial NAS RAID1 disks moved to Software Raid system?

    - by Rolnik
    I've got a couple of commercial NAS boxes and I'm wondering if they (ReadyNAS Duo, D-Link DNS-323) or any other NAS are suitable for having their RAIDed disks moved to a software-based NAS. To be specific, I'm a big fan of the (largely) Debian-based Ubuntu. Can the aforementioned NAS drives be migrated to Ubuntu (e.g. using the mdadm Linux command)? Secondly, is there any commercial NAS that can be migrated over? Incidentally, here is a link to somebody who succeeded in a migration: http://www.linuxquestions.org/questions/slackware-14/moving-raid1-drives-into-computer-with-same-md-numbers-862312/ The specific scenario I'd like to prepare for is the eventual (sudden) death of one of the NAS motherboards.

  • Recommend an SFTP solution for Windows (Server & Client) that integrates well with Dreamweaver

    - by aaandre
    Could you please recommend a secure FTP solution for updating files on a remote Windows server from Windows workstations? We would like to replace the FTP-based workflow with a secure one. The remotely hosted web server runs Windows Server 2003; the workstations run Windows XP. We manage the files via Dreamweaver's built-in FTP. My understanding is that Dreamweaver supports SFTP out of the box, so I guess I am looking for a good SFTP server for Windows Server 2003 – ideally one that does not require Cygwin, and that authenticates against the existing Windows accounts and permissions. Thank you!

  • Open Source Software Development Center at University of Belgrade

    - by Tori Wieldt
    A new Open Source Software Development Center has opened at the University of Belgrade, Serbia. It centers on using Java and NetBeans as open source projects to learn from and contribute to. Assistant Professor Zoran Sevarac says that not only does the center allow him to teach software development using open source projects, but also "we are improving our University courses based on the experience we get from working on open source code." Some of the projects underway are a NetBeans UML plugin; Neuroph (a Java neural network framework, with a NetBeans Platform-based UI); a NetBeans DOAP plugin; WorkieTalkie (a NetBeans chat plugin); and 2D and 3D visualization plugins for NetBeans. The University of Belgrade also has an official university course on open source development, where students learn to use development tools, work in teams, participate in open source projects, and learn from real-world software development projects. Students, teachers, and researchers at the University of Belgrade, and any member of the open source community, are welcome to come and learn software development from successful open source projects. For more information, you can contact Zoran Sevarac (@neuroph on Twitter).

  • Backup Exec backup-to-disk folder creation - Access denied

    - by ewwhite
    I'm having a difficult time creating a backup-to-disk folder in Symantec Backup Exec 12.5 and Backup Exec 2010. The backend storage is a Nexenta/ZFS-based NAS filer sharing the volume via CIFS. I've also seen the issue on other *nix-based NAS devices. I've attempted mapping the drive, providing the full path to the folder, etc. I can browse to the share just fine from within Windows, but Backup Exec fails to create the B2D folder, giving variants of an "Unable to create new backup folder. Access denied" error. I've tried creating service accounts in Backup Exec to handle the authentication, but nothing seems to work. What's the key to making this work?

  • Copy/move EBS volume from one Region to another

    - by Gnanam
    Background of our setup: we've hosted our web-based application in the Amazon EC2 US East (Virginia) Region. Our instance runs a Linux distribution (CentOS) and the AMI is S3-backed. One EBS volume (400 GB) is attached to this instance. Question: we've planned to migrate our deployment to the US West (N. California) Region. From the AWS docs, I understood that for moving an AMI there is a command-line tool available - ec2-migrate-bundle. But for moving an EBS volume across Regions, there is currently no tool available. I'm looking for the easiest and/or fastest way of copying/moving an EBS volume from one Region to another. Also, are there any hidden risks involved during and/or after the migration? Experts' ideas/suggestions/recommendations on this are highly appreciated.

  • Authorization design-pattern / practice?

    - by Lawtonfogle
    On one end, you have users. On the other end, you have activities. I was wondering if there is a best practice for relating the two. The simplest way I can think of is to give every activity a role and assign every user every role they need. The problem is that this gets really messy in practice as soon as you go beyond a trivial system. A design I recently used was to have users who have roles, roles that have privileges, and activities that require some combination of privileges. For the trivial case this is more complex, but I think it scales better; after I implemented it, though, I felt it was overkill for the system I had. Another option would be to have users who have roles, and activities that require a certain role to perform, with many activities sharing roles. A more complex variant would give an activity many possible roles, of which you only need one. An even more complex variant would allow logical statements of role ownership to use an activity (i.e. must have A and (B exclusive-or C) and must not have D). I could continue to list more, but I think this already gives the picture, and many of these have trade-offs. But in software design there are often solutions that, while perhaps not perfect in every possible case, are so clearly top of the pack that the choice isn't even considered opinion based (e.g. how to store passwords: plain text is worst, hashing is better, hashing with a salt even better, despite the increased complexity of each level; or: Smart UI designs for applications are bad, even if what the best design is remains subjective). So, is there a best practice for authorization design that is not purely opinion based/subjective?
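    For illustration, a minimal sketch of the users → roles → privileges → activities variant described above (all names are hypothetical, and the containers are chosen only for brevity):

        #include <iostream>
        #include <set>
        #include <string>

        using Privilege = std::string;

        struct Role {
            std::set<Privilege> privileges;
        };

        struct User {
            std::set<const Role*> roles;
            bool has(const Privilege& p) const {
                for (const Role* r : roles)
                    if (r->privileges.count(p)) return true;
                return false;
            }
        };

        struct Activity {
            std::set<Privilege> required;   // the user must hold every one of these
            bool allowed(const User& u) const {
                for (const Privilege& p : required)
                    if (!u.has(p)) return false;
                return true;
            }
        };

        int main() {
            Role editor{{"doc.read", "doc.write"}};
            User alice{{&editor}};
            Activity publish{{"doc.write", "doc.publish"}};
            std::cout << std::boolalpha << publish.allowed(alice) << '\n';   // false – missing doc.publish
        }

    The "many possible roles, of which you only need one" variant would simply change the allowed() check from all-of to any-of.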

  • How can I "diff" two files with Nautilus?

    - by bioShark
    I have installed Meld and found it's a great comparison tool. Unfortunately there is no integration with Nautilus 3.2, which means I can't right-click on files and select an option to open them in Meld for comparison. I have seen in the tool's comments that it needs the diff-ext package to be installed. This package has been removed from the Ubuntu universe repository, I am guessing because of GTK 3.0. Even when I manually downloaded the diff-ext package from SourceForge, the configure check fails with the message:

        checking for DIFF_EXT... configure: error: Package requirements (libnautilus-extension >= 2.14.0 gconf-2.0 >= 2.14.0 gnome-vfs-module-2.0 >= 2.14) were not met:
        No package 'libnautilus-extension' found
        No package 'gconf-2.0' found
        No package 'gnome-vfs-module-2.0' found

    OK, so from this output I gather that GTK 2 is indeed required to install the diff extension for Nautilus. Now, my question is: is there a way to integrate Meld into Nautilus? Or are there any other diff-based tools that integrate with the current (GTK 3-based) Nautilus? I am using Ubuntu 11.10 if there was any doubt so far. Cheers and thanks in advance.

  • What emulator / VM software can I use to create a Win32-portable Linux Guest?

    - by Jotham
    Hi, I want to create a portable VM setup so that I can boot a Linux install regardless of which Windows XP / Windows 7 host machine I am on. I was looking at QEMU, but it doesn't appear to have a relatively safe Win32 build. Other options like VirtualBox require a complete install on the host OS for performance reasons. I'm not so concerned about performance; I just want to run a few curses-based applications. My ideal end goal would be a memory stick of some size with a VM/emulator I can boot on most Windows XP / Windows 7 machines to access my own curses-based applications (probably Arch Linux or Debian). Any help would be appreciated. Regards,

  • Computer prints blank pages before and after content

    - by Cpt. Jack
    This would seem like a pretty simple question, but I have exhausted every idea I can come up with. I bought a brand new Dell Latitude E5410 not too long ago with Windows 7. I installed Office 2010 on the machine right away and have had a printing problem since day one. For some reason, every time I print a page, a blank page prints out before and after the content. This also applies to any other application, such as Notepad or printing an email. If I have a 6-page document, it still prints out one blank page before and after every content page, meaning I get my 6-page document along with 12 blank pages. I can't figure out whether this is some sort of default setting or what else would cause this printing behavior. Mine is the only computer on the network that has this problem, and quite frankly I'm getting tired of it. Can anyone help me figure this out or steer me in the right direction to correct this problem?

  • How to scale out OpenStreetMap data efficiently

    - by Pierre
    For over a year now, I have been running an in-house PostGIS server filled with OSM data, used for both Mapnik-based tile generation and Nominatim-based geocoding, and updated with daily replication diffs. This works pretty well. However, as usage is growing exponentially, I would like to achieve better reliability and performance by adding additional PostgreSQL servers, and I'm kind of lost. Since PostgreSQL doesn't seem to handle replication by itself, I would think about using a piece of middleware like PgPool-II to keep the servers in sync, but I'm afraid it would be more than is necessary for this usage: a very high read-to-write ratio, where all writes happen at the same time every day. My questions are simple: what would you do to keep these servers in sync? And what is done for this at the OpenStreetMap Foundation, MapQuest, Mapbox or CloudMade? Thanks.

  • How to make and restore incremental snapshots of hard disk

    - by brunopereira81
    I use VirtualBox a lot for distro and application testing. One of the features I simply love about it is virtual machine snapshots: it saves the state of a virtual machine and can restore it to its former glory if something you did went wrong, without any trouble and without consuming all your hard disk space. On my live systems I know how to create a 1:1 image of the file system, but all the solutions I've found create a complete new image of the file system each time. Are there any programs or file systems that can take a snapshot of the current file system and save it to another location, but instead of making a complete new image create incremental backups? To describe what I want simply: it should be like dd images of a file system, but instead of only full backups it would also create incremental ones. I am not looking for Clonezilla, etc. It should run within the system itself with no (or almost no) intervention from the user, but contain all the data of the file systems. I am also not looking for a "duplicity backup of your whole system excluding some folders" script plus dd to save your MBR; I can do that myself, I'm looking for extra finesse. I'm looking for something I can run before making massive changes to a system, so that if something went wrong, or I burned my hard disk after spilling coffee on it, I can just boot from a live CD and restore a working snapshot to the hard disk. It does not need to be daily; it doesn't even need a schedule. Just run it once in a while and let it do its job, preferably raw-based rather than file-copy based.

  • Microeconomic simulation: coordination/planning between self-interested trading agents

    - by Milton Manfried
    In a typical perfect-information strategy game like chess, an agent can calculate its best move by searching the state tree, while assuming that the opponent will also make the best possible move (i.e. minimax). I would like to use this approach in a "game" modeling economic activity, where the possible "moves" would be to buy or sell for a given price, and the goal, rather than reaching a specific class of states (e.g. checkmate), would be to maximize some function F of the agent's state (e.g. F(money, widget) = 10*money + widget). How do I handle buy/sell actions that require coordination between both parties, at the very least agreement upon a price? The cheap way out would be to set the price beforehand, maybe based upon the current supply -- but the idea of this simulation is to examine how prices emerge when freely determined by "perfectly rational" agents. A great example of what I do not want is the trading algorithm in SugarScape -- paraphrasing from Growing Artificial Societies p. 101-102: when a pair of agents interact to trade, they each compute their internal valuations of the goods, then a bargaining process is conducted and a price is agreed to; if this price makes both agents better off, they complete the transaction. The protocol itself is beautiful, but what it cannot capture (as far as I can tell) is the ability of an agent to pay more than it otherwise might for a good, because it knows that it can sell it for even more at a later date -- what appears to be called "strategic thinking" in this paper at Google Books (Multi-Agent-Based Simulation III: 4th International Workshop, MABS 2003). To get realistic behavior like that, it seems one would either (1) have to build an outrageously complex internal valuation system which could at best only cover situations that were planned for at compile time, or otherwise (2) have some mechanism to search the state tree... which would require some way of planning future trades. Note: the chess analogy only works as far as the state-space search goes; the simulation isn't intended to be "zero sum", so a literal minimax search wouldn't be appropriate -- and ideally, it should work with more than two agents.
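    To make the "pay more now because you expect to sell for more later" idea concrete, here is a minimal one-step-lookahead sketch (the names, numbers, and the fixed resale estimate are purely hypothetical; a real version would search deeper and model the other agents):

        #include <iostream>

        // Agent state and the example utility F(money, widget) = 10*money + widget.
        struct Agent {
            double money;
            double widgets;
        };

        double utility(const Agent& a) {
            return 10.0 * a.money + a.widgets;
        }

        // Buy one widget at `price` now if, assuming we can resell it next turn at
        // `expectedResalePrice`, the resulting state beats doing nothing.
        bool shouldBuy(const Agent& a, double price, double expectedResalePrice) {
            Agent afterBuy  = {a.money - price, a.widgets + 1};
            Agent afterSell = {afterBuy.money + expectedResalePrice, afterBuy.widgets - 1};
            return utility(afterSell) > utility(a);
        }

        int main() {
            Agent a{100.0, 0.0};
            // A widget adds only 1 to F (worth 0.1 money), so paying 0.2 is a loss
            // in the short term -- but an expected resale at 0.5 makes it worthwhile.
            std::cout << std::boolalpha << shouldBuy(a, 0.2, 0.5) << '\n';   // true
        }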

  • How does IBM implement the WebSphere Application Server SDK for Sun Solaris OS?

    - by Eng Al-Rawabdeh
    I deploy the same application in IBM WAS on different OSes (Windows, AIX and Sun Solaris), and SDK errors appear only on Solaris. Some sites I've read say that the SDK in WAS on Solaris was built based on the Sun SDK – is that right? So I need to know whether IBM built the Solaris SDK from scratch or based it on the Sun SDK. More details: I installed the same IBM WAS application server on two servers: 1. Server1 – OS (AIX); 2. Server2 – OS (Solaris). These two servers are on the same network and have the same configuration. Then I deployed a Java application (X) on both servers. Application X ran on Server1 (AIX) without any problem, but when I ran it on Server2 (Solaris) I hit the SDK issue. So I need to know: what is the difference between the AIX WAS SDK and the Solaris WAS SDK? Note: I tried Windows and it ran without any problem.

  • Customers Deploying Sun Oracle Database Machine

    - by kimberly.billings
    Philippine Savings Bank (PS Bank) recently deployed the Sun Oracle Database Machine to underpin its enterprise-wide analytics platform. Queries and requests that used to take from three hours to several days now complete in less than one minute, with near real-time updates. Read the press release. EFU General Insurance also announced this week that it has deployed the Sun Oracle Database Machine. With Oracle, EFU will be able to open more sales channels via the Web and facilitate integration with other companies. As a result, higher-quality services can be offered to its customers via the Web thanks to the more agile and reliable IT infrastructure. In addition, a centralized IT environment will give EFU management a real-time view of key information, enabling EFU to analyze business trends and make timely decisions. Read the press release. Let us know about your Sun Oracle Database deployment!

  • Is this an acceptable approach to undo/redo in Python?

    - by Codemonkey
    I'm making an application (wxPython) to process some data from Excel documents. I want the user to be able to undo and redo actions, even gigantic actions like processing the contents of 10,000 cells simultaneously. I Googled the topic, and all the solutions I could find involve a lot of black magic or are overly complicated. Here is how I imagine my simple undo/redo scheme. I write two classes – one called ActionStack and an abstract one called Action. Every "undoable" operation must be a subclass of Action and define the methods do and undo. The Action subclass is passed the instance of the "document", or data model, and is responsible for committing the operation and remembering how to undo the change. Now, every document is associated with an instance of ActionStack. The ActionStack maintains a stack of actions (surprise!). Every time actions are undone and new actions are performed, the undone actions are removed forever. The ActionStack will also automatically remove the oldest Action when the stack reaches a configurable maximum size. I imagine the workflow would produce code looking something like this:

        class TableDocument(object):
            def __init__(self, table):
                self.table = table
                self.action_stack = ActionStack(history_limit=50)

            # ...

            def delete_cells(self, cells):
                self.action_stack.push(DeleteAction(self, cells))

            def add_column(self, index, name=''):
                self.action_stack.push(AddColumnAction(self, index, name))

            # ...

            def undo(self, count=1):
                self.action_stack.undo(count)

            def redo(self, count=1):
                self.action_stack.redo(count)

    Given that none of the methods I've found are this simple, I thought I'd get the experts' opinion before I go ahead with this plan. More specifically, what I'm wondering about is: are there any glaring holes in this plan that I'm not seeing?

  • Set up a root server using Ubuntu and Virtualization

    - by Daniel Völkerts
    Hello, I'd like to set up a fresh root server and install Linux-based virtualization on it. My thoughts: Intel VT hardware, Ubuntu 9.10, KVM-based virtualization. Access to the root server will be via SSH only, for administration. Has anybody done this before, and what did you learn from daily use? My requirements are: very secure, so the root server only exposes SSH to the dom0 and a minimal set of ports for the guests (e.g. HTTP/S); good monitoring of host and guests (my idea is to use Zabbix for it); easy and fast administration (how are the command-line tools working for you? Cryptic? High learning curve?). I'm pleased to learn from your suggestions. Regards, Daniel Völkerts

  • Writing Acceptance test cases

    - by HH_
    We are integrating a testing process into our Scrum process. My new role is to write acceptance tests for our web applications in order to automate them later. I have read a lot about how test cases should be written, but none of it gave me practical advice for writing test cases for complex web applications; instead it offered conflicting principles that I found hard to apply. Test cases should be short: take the example of a CMS. Short test cases are easy to maintain, and their inputs and outputs are easy to identify. But what if I want to test a long series of operations (e.g. adding a document, sending a notification to another user, the other user replying, the document changing state, the user getting a notice)? It rather seems to me that test cases should represent complete scenarios, but I can see how this would produce overly complex test documents. Tests should identify inputs and outputs: what if I have a long form with many interacting fields, with different behaviors? Do I write one test for everything, or one for each? Test cases should be independent: but how can I apply that if testing the upload operation requires that the connect operation succeeds? And how does it apply to writing test cases – should I write a test for each operation, with each test declaring its dependencies, or should I rewrite the whole scenario for each test? Test cases should be lightly documented: this principle is specific to agile projects, so do you have any advice on how to implement it? Although I thought that writing acceptance test cases was going to be simple, I found myself overwhelmed by every decision I had to make (FYI: I am a developer, not a professional tester). So my main question is: what steps or advice do you have for writing maintainable acceptance test cases for complex applications? Thank you.

  • Running multiple sites with multiple domains on Apache

    - by PsychoData
    I am having a rough time running Apache with multiple domain names. Here is a snippet of my config file. I keep getting an error saying that NameVirtualHost has no VirtualHosts. I want both sites running on the same IP, and I'm not sure why this doesn't work. I've been digging through the documentation for VirtualHost, NameVirtualHost, and Apache's page about name-based virtual hosting. The example on that name-based page is almost exactly my config! What am I doing wrong?

        Listen *:80
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName www.sample1.net
            DocumentRoot /var/www/sample1-net
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.example2.net
            DocumentRoot /var/www/example2-net
        </VirtualHost>

  • Frame Independent Movement

    - by ShrimpCrackers
    I've read two other threads here on movement ("Time based movement vs frame rate based movement?" and "Fixed time step vs variable time step"), but I think I'm lacking a basic understanding of frame-independent movement, because I don't understand what either of those threads is talking about. I'm following along with lazyfoo's SDL tutorials and came upon the frame-independent lesson: http://lazyfoo.net/SDL_tutorials/lesson32/index.php I'm not sure what the movement part of the code is trying to say, but I think it's this (please correct me if I'm wrong): in order to have frame-independent movement, we need to find out how far an object (e.g. a sprite) moves within a certain time frame, for example 1 second. If the dot moves at 200 pixels per second, then I need to calculate how much it moves within that second by multiplying 200 pps by 1/1000 of a second. Is that right? The lesson says: "velocity in pixels per second * time since last frame in seconds. So if the program runs at 200 frames per second: 200 pps * 1/200 seconds = 1 pixel". But... I thought we were multiplying 200 pps by 1/1000th of a second. What is this business with frames per second? I'd appreciate it if someone could give me a bit more detailed explanation of how frame-independent movement works. Thank you.
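    For what it's worth, here is a minimal sketch of the idea (plain standard C++ with std::chrono standing in for SDL's timer; the frame count and velocity are only illustrative). The per-frame distance is velocity multiplied by the seconds elapsed since the previous frame – which is 1/200 s only if the loop happens to run at 200 frames per second:

        #include <chrono>
        #include <iostream>

        int main() {
            using clock = std::chrono::steady_clock;

            const double velocity = 200.0;   // pixels per second
            double x = 0.0;                  // the dot's position

            auto last = clock::now();
            for (int frame = 0; frame < 5; ++frame) {
                auto now = clock::now();
                // seconds since the last frame, whatever the frame rate is
                double dt = std::chrono::duration<double>(now - last).count();
                last = now;

                x += velocity * dt;          // covers 200 px per second at any frame rate
                std::cout << "x = " << x << '\n';
            }
        }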

  • Is there a framework for describing object oriented communication standards/protocols?

    - by martin
    Currently I'm dealing with the development of specifications for communication standards/protocols for B2B integration based on object-oriented models. For example, if you look at the healthcare domain, there is HL7v3 with its HDF. Now I'm asking whether there is a more generic framework that describes how a specification for a communication standard should be developed. For B2B integration I want to describe a communication standard, based on UML models, for a broad domain. My thought was to divide the domain into subdomains and derive message types from the resulting model. There is already a given framework, but I want to compare it to another framework. My idea is to compare them using a generic framework; it should describe several levels. Does anybody know such a framework? I have searched for a while on Google Scholar but haven't found an appropriate framework yet. The only thing I have found is ebXML, but I think it is not exactly what I need.

  • What reasons are there to reduce the max-age of a logo to just 8 days? [closed]

    - by callum
    Most websites set max-age=31536000 (1 year) on the Cache-control headers of static assets such as logo images. Examples: YouTube Yahoo Twitter BBC But there is a notable exception: Google's logo has max-age=691200 (8 days). I've checked the headers on the Google logo in the past, and it definitely used to be 1 year. (Also, it used to be part of a sprite, and now it is a standalone logo image, but that's probably another question...) What could be valid technical reasons why they would want to reduce its cache lifetime to just 8 days? Google's homepage is one of the most carefully optimised pages in the world, so I imagine there's a good reason. Edit: Please make sure you understand these points before answering: Nobody uses short max-age lifetimes to allow modifying a static asset in future. When you modify it, you just serve it at a different URL. So no, it's nothing to do with Google doodles. Think about it: even if Google didn't understand this basic trick of HTTP, 8 days still wouldn't be appropriate, as only those users who don't have the original logo cached would see the doodle on doodle-day – and then that group of users would go on seeing the doodle for the following 8 days after Google changed it back :) Web servers do not worry about "filling up" the caches of clients (or proxies). The client manages this by itself – when it hits its own storage limit, it just starts dropping the lowest priority items to make space for new items. The priority score is based on the question "How likely am I to benefit from having cached this URL?", which is nothing to do with what max-age value the server sent when the URL was originally requested; it's a heuristic based on the "frecency" of requests for that URL. The max-age simply lets the server set a cut-off point – the time at which the client is supposed to discard the item regardless of how often it's being re-used. It would be very nice and trusting of a downstream client/proxy to rely on all origin servers "holding back" from filling up their caches, but I don't think we live in that world ;)
