Search Results

Search found 335 results on 14 pages for 'mirroring'.

Page 11/14 | < Previous Page | 7 8 9 10 11 12 13 14  | Next Page >

  • Comix is an Awesome Comics Archive Viewer for Linux

    - by Asian Angel
    Do you have a terrific collection of comics in electronic form but need a great app to view them with? If you have a Linux system then we have the perfect app for you…Comix, the open source comic reading powerhouse. For our example we installed Comix on our Ubuntu 10.10 system. Just go to the Ubuntu Software Center and conduct a quick search. When you go to install Comix in the Ubuntu Software Center, make sure to scroll all the way to the bottom and select Unarchiver for .rar files. The listing appears as a “non-free version” for some reason, but displays as free once selected. Odd, but nothing to worry about in the end… Once Comix is installed you can find it in the Graphics Section of the Ubuntu Menu. Comix also comes with a nice set of options to let you customize the app to best suit those important comic reading needs. Here is a comprehensive list of the features this little comic reading powerhouse packs into one easy to use package: fullscreen mode, double page mode, fit-to-screen mode, zooming and scrolling, rotation and mirroring, magnification lens, changeable image scaling quality, image enhancement, right-to-left reading (for manga, etc.), caching for faster page flipping, bookmarks support, customizable GUI, archive comments support, archive converter, thumbnail browser, standards compliance, availability in multiple languages (English, Swedish, Simplified Chinese, Spanish, Brazilian Portuguese, & German), support for the JPEG, PNG, TIFF, GIF, BMP, ICO, XPM, & XBM image formats, reading of ZIP & tar archives natively and RAR archives through the unrar program, and it runs on Linux, FreeBSD, NetBSD, and virtually any other UNIX-like OS, and more! Have fun reading those comics on your favorite Linux system! Interested in learning more about Comix? Then be certain to drop by the homepage! Comix Homepage

    Read the article

  • You do not need a separate SQL Server license for a Standby or Passive server - this Microsoft White Paper explains all

    - by tonyrogerson
    If you were in any doubt about whether you need to license standby / passive failover servers, the White Paper “Do Not Pay Too Much for Your Database Licensing” will settle those doubts. I’ve had debates before with people who think you can only have a single instance as a standby machine; that’s just wrong. You could have a scenario with a 2-node active/passive cluster plus database mirroring and log shipping (a total of 4 SQL Server instances) – in that setup you only need to buy one physical license, so long as the standby nodes have the same number of physical processors or fewer (cores are irrelevant). So next time your supplier suggests you need a license for your standby box, tell them you don’t and educate them by pointing them to the white paper. For clarity I’ve copied the extract below from the White Paper. Extract from “Do Not Pay Too Much for Your Database Licensing”: Standby Server – Customers often implement a standby server to make sure the application continues to function in case the primary server fails. The standby server continuously receives updates from the primary server and will take over the role of the primary server if the primary fails. Following are comparisons of how each vendor supports standby server licensing. SQL Server: Customers do not need to license the standby (or passive) server, provided that the number of processors in the standby server is equal to or less than the number in the active server. Oracle DB: Oracle requires the customer to fully license both active and standby servers, even though the standby server is essentially idle most of the time. IBM DB2: IBM licensing on the standby server is quite complicated and differs for every edition of DB2. For Enterprise Edition, a minimum of 100 PVUs or 25 Authorized Users is needed to license the standby server. The following graph compares prices based on a database application with two processors (dual-core) and 25 users with one standby server. [chart snipped] Note: All prices are based on the newest Intel Xeon Nehalem processor database pricing for purchases within the United States and are in United States dollars. Pricing is based on information available on vendor Web sites for Enterprise Edition. Microsoft SQL Server Enterprise Edition: 25 users (CALs) x $164 / CAL + $8,592 / Server = $12,692 (no need to license the standby server). Oracle Enterprise Edition (base license without options): Named User Plus minimum (25 Named Users Plus per Core) = 25 x 2 = 50 Named Users Plus x $950 / Named User Plus x 2 servers = $95,000. IBM DB2 Enterprise Edition (base license without feature pack): Need to purchase 125 Authorized Users (400 PVUs / 100 PVUs = 4 x 25 = 100 Authorized Users + 25 Authorized Users for the standby server) = 125 Authorized Users x $1,040 / Authorized User = $130,000

    Read the article

  • The Social Business Thought Leaders - Ray Wang

    - by kellsey.ruppel
    It seems both consumers and businesses are at the peak of the social hype. Overwhelmed by social media channels, platforms, and processes both in their private and professional lives, many early adopters are starting to feel social fatigue. Mirroring what happened with email and web sites during the late 1990s – early 2000s, more and more managers are looking to move from ubiquitous social media tactics to the most appropriate business use cases and processes. This step becomes even more important considering the year-over-year contraction in IT budgets and the consequent need to maximize return on every dollar spent on new technologies. Ray Wang, CEO and Principal Analyst at Constellation Research, suggests engagement through collaborative technologies both as a conceptual model and a transformational tool for enterprises to reap business value. Without participation - the reasoning goes - there is no value, and good technology alone is not enough to guarantee employee and customer adoption. Enterprise gamification is a new lever to succeed with Social Business by directing a critical mass of participation towards desired outcomes. What kind of outcomes? A recent study from Constellation Research (see 2012 Q1 Gamification Early Adopters Best Practices) highlights how Marketing, Customer Service and HR are leading the pack with gamification in processes such as: Sustaining long-term customer loyalty (76.4%) Improving campaign-to-lead response (74.5%) Right-channeling incidents for resolution in social media (67.3%) Growing the number of service and support incidents resolved by the community (63.6%) Improving employee referral rates and effective recruiting (43.6%) Driving on-boarding success with new hires (20%) More than simply adding badges, points and leaderboards to existing processes, enterprise gamification should be holistically embedded into the employee and customer experience to stimulate specific behaviors. According to Ray Wang this can be done at three core levels: Measurable actions – the behaviors we want to facilitate consist of granular actions (i.e. likes, comments, posts, recommendations, etc.) and more complex actions (i.e. projects, initiatives, programmes) attributed to individuals, groups and/or external actors. Reputation – the reputation an individual has earned through their actions is a key factor in building motivation among others, and it is determined by their identity, social standing, status and competitiveness. Incentives – the intrinsic and extrinsic rewards that motivate behaviors and drive actions. Listen to Ray Wang's video interview to learn more about the dynamics that are shaping the future of collaboration and how gamification can help organizations attain new levels of engagement.

    Read the article

  • New Options for MySQL High Availability

    - by Mat Keep
    Data is the currency of today’s web, mobile, social, enterprise and cloud applications. Ensuring data is always available is a top priority for any organization – minutes of downtime will result in significant loss of revenue and reputation. There is not a “one size fits all” approach to delivering High Availability (HA). Unique application attributes, business requirements, operational capabilities and legacy infrastructure can all influence HA technology selection. And technology is only one element in delivering HA – “People and Processes” are just as critical as the technology itself. For this reason, MySQL Enterprise Edition is available supporting a range of HA solutions, fully certified and supported by Oracle. MySQL Enterprise HA is not some expensive add-on, but included within the core Enterprise Edition offering, along with the management tools, consulting and 24x7 support needed to deliver true HA. At the recent MySQL Connect conference, we announced new HA options for MySQL users running on both Linux and Solaris: - DRBD for MySQL - Oracle Solaris Clustering for MySQL. DRBD (Distributed Replicated Block Device) is an open source Linux kernel module which leverages synchronous replication to deliver high availability database applications across local storage. DRBD synchronizes database changes by mirroring data from an active node to a standby node and supports automatic failover and recovery. Linux, DRBD, Corosync and Pacemaker provide an integrated stack of mature and proven open source technologies. DRBD Stack: Providing Synchronous Replication for the MySQL Database with InnoDB. Download the DRBD for MySQL whitepaper to learn more, including step-by-step instructions to install, configure and provision DRBD with MySQL. Oracle Solaris Cluster provides high availability and load balancing to mission-critical applications and services in physical or virtualized environments. With Oracle Solaris Cluster, organizations have a scalable and flexible solution that is suited equally to small clusters in local datacenters or larger multi-site, multi-cluster deployments that are part of enterprise disaster recovery implementations. The Oracle Solaris Cluster MySQL agent integrates seamlessly with MySQL, offering a selection of configuration options in the various Oracle Solaris Cluster topologies. Putting it All Together: When you add MySQL Replication and MySQL Cluster into the HA mix, along with 3rd party solutions, users have extensive choice (and decisions to make) to deliver HA services built on MySQL. To make the decision process simpler, we have also published a new MySQL HA Solutions Guide. Exploring beyond just the technology, the guide presents a methodology to select the best HA solution for your new web, cloud and mobile services, while also discussing the importance of people and process in ensuring service continuity. This subject was recently presented at Oracle Open World, and the slides are available here. Whatever your uptime requirements, you can be sure MySQL has an HA solution for your needs. Please don't hesitate to let us know of your HA requirements in the comments section of this blog. You can also contact MySQL consulting to learn more about their HA Jumpstart offering, which will help you scope out your scaling and HA requirements.

    Read the article

  • Picture rendered from above and below using an Orthographic camera do not match

    - by Roy T.
    I'm using an orthographic camera to render slices of a model (in order to voxelize it). I render each slice both from above and below in order to determine what is inside each slice. The model I render is a simple 'T' shape constructed from two cubes. The cubes have the same dimensions and have the same Y (height) coordinate. See figure 1 for a render of it in Blender. I render this model once directly from above and once directly from below. My expectation was that I would get exactly the same image (except for mirroring over the y-axis). However, when I render using a very low resolution render target (25x25) the position (in pixels) of the 'T' is different when rendered from above as opposed to rendered from below. See figures 2 and 3. The pink blocks are not part of the original rendering but I've added them so you can easily count/see the differences. Figure 2: the T rendered from above. Figure 3: the T rendered from below. This is probably due to what I've read about pixel and texel coordinates, which might be biased to the top-left as seen from the camera. Since I'm using the same 'up' vector for both of my cameras, the bias only shows on the x-axis. I've tried to change the position of the camera and its look-at by what I thought should be half a pixel. I've tried both shifting a single camera and shifting both cameras, and while I see some effect I am not able to get a pixel-by-pixel perfect copy from both cameras. Here I initialize the camera and compute what I believe to be half a pixel. boundsDimX and boundsDimZ are the dimensions of a slightly enlarged bounding box around the model, which I also use as the width and height of the view volume of the orthographic camera. Matrix projection = Matrix.CreateOrthographic(boundsDimX, boundsDimZ, 0.5f, sliceHeight + 0.5f); Vector3 halfPixel = new Vector3(boundsDimX / (float)renderTarget.Width, 0, boundsDimY / (float)renderTarget.Height) * 0.5f; This is the code where I set the camera position and camera look-ats: // Position camera if (downwards) { float cameraHeight = bounds.Max.Y + 0.501f - (sliceHeight * i); Vector3 cameraPosition = new Vector3 ( boundsCentre.X, // possibly adjust by half a pixel? cameraHeight, boundsCentre.Z ); camera.Position = cameraPosition; camera.LookAt = new Vector3(cameraPosition.X, cameraHeight - 1.0f, cameraPosition.Z); } else { float cameraHeight = bounds.Max.Y - 0.501f - (sliceHeight * i); Vector3 cameraPosition = new Vector3 ( boundsCentre.X, cameraHeight, boundsCentre.Z ); camera.Position = cameraPosition; camera.LookAt = new Vector3(cameraPosition.X, cameraHeight + 1.0f, cameraPosition.Z); } Main question: now you've seen all the problems and code, you can guess it. My main question is: how do I align both cameras so that they each render exactly the same image (mirrored along the Y axis)? Figure 1: the original model rendered in Blender
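    One possible adjustment, sketched below as a hedged example rather than a verified fix: compute the world-space size of one texel from the question's own boundsDimX/boundsDimZ and the render target dimensions, then nudge each camera (and its look-at) by half a texel, flipping the X component for the upward-facing camera since that axis appears mirrored. The helper name and the sign convention are assumptions and may need tweaking for a particular handedness.

        using Microsoft.Xna.Framework;

        static class VoxelCameraHelper
        {
            // Hypothetical helper (not from the original post): nudges an orthographic
            // slice camera by half a texel so the top-down and bottom-up renders sample
            // the same world positions. The sign convention is an assumption.
            public static void ApplyHalfTexelOffset(
                ref Vector3 cameraPosition, ref Vector3 lookAt,
                float boundsDimX, float boundsDimZ,
                int targetWidth, int targetHeight, bool downwards)
            {
                // World-space size of one render-target pixel.
                float worldPerPixelX = boundsDimX / targetWidth;
                float worldPerPixelZ = boundsDimZ / targetHeight;

                // Half-texel shift; when looking up, the X axis appears mirrored,
                // so the X component changes sign.
                Vector3 offset = new Vector3(
                    (downwards ? 0.5f : -0.5f) * worldPerPixelX,
                    0f,
                    0.5f * worldPerPixelZ);

                cameraPosition += offset;
                lookAt += offset;
            }
        }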

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and use some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan, who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me: As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above. -- Udi Dahan, Clarified CQRS From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) is supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes, as is often the case with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out. Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
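    To make the granularity discussion concrete, here is a hedged C# illustration (the command names and fields are invented for the example, not taken from Udi Dahan's article): the first class is the coarse, "Excel-like" update that hides intent; the others are the task-based commands that CQRS advocates. The trade-off the question raises is how many of these small commands end up on the wire when a user performs a bulk edit.

        using System;

        // Coarse-grained, "Excel-like" update: one command carries every field,
        // so the handler cannot tell what the user actually intended to change.
        public class UpdateCustomerCommand
        {
            public Guid CustomerId { get; set; }
            public string Name { get; set; }
            public string Address { get; set; }
            public bool IsPreferred { get; set; }
            public string MaritalStatus { get; set; }
        }

        // Intent-revealing, task-based commands: each is a separate unit of work
        // that captures a specific business intention.
        public class MakeCustomerPreferredCommand
        {
            public Guid CustomerId { get; set; }
        }

        public class RecordCustomerMoveCommand
        {
            public Guid CustomerId { get; set; }
            public string NewAddress { get; set; }
        }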

    Read the article

  • Will you share your SQL Server configuration?

    - by Bill Graziano
    I regularly visit client sites and review their SQL Server configurations. I come across all kinds of strange settings. I’ve been thinking about a way to aggregate people’s configurations and see what’s common and what’s unique. I used to do that with polls on SQLTeam.com. I think we can find out more interesting things if we look at combinations of settings in relation to size and volume. I’ve been working on an application for another project that is similar. It will be fairly easy to use that code for this. I can have something up and running in a few days – if people are interested in it. I admit that I often come up with ideas that just don’t make sense. This may be one of them. One of your biggest concerns has to be how secure your data is. My solution is not to store anything identifying. The instance name and database names can both be “anonymized” and I don’t store the machine name or IP address or anything to do with logins. Some of the questions I’m curious about are: At what size database does the Enterprise Edition become prevalent? Given the total size of the databases, how much RAM is common? How many people have multiple data files? At what size does that become prevalent? How common is database mirroring? Replication? Log shipping? How common is full recovery mode? At what data size does it become prevalent? I think those are all questions that are easy to answer -- with the right data. The big question is whether or not people will share their SQL Server configurations. I understand that organizations in regulated or high security environments can’t participate. But I think that leaves many, many people that can. Are you willing to share your configuration and learn about others? I have a simple sign up form here. It’s actually a mailing list signup that also captures your edition, number of servers and largest database. The list will only be used for this project. Is your SQL Server configured correctly? Do you wonder what the next step is as your data grows? Take a second and sign up.

    Read the article

  • How can I fix broken i915 drivers for Intel GPUs?

    - by Alen Mujezinovic
    I've got trouble getting the i915 drivers to work correctly on my laptop (HP Pavilion DM4 2101ea). Specifically, the laptop screen goes black and stays black after the splash graphic when booting both from USB key and from the hard drive. To get anything onto the display after the splash screen I have to boot with one of acpi=off, nomodeset, or i915.modeset=0. I'd rather not turn ACPI off because I like my fans spinning, and nomodeset is a bit overkill, so for now I'm booting with i915.modeset=0. Unfortunately, this turns off KMS and my current maximum resolution on the laptop screen is fixed at 1024x768 instead of its real capability. When I don't set any of the above boot flags and I plug in an external monitor, the external monitor works fine. When booting with the flags, the external monitor works fine too, but can only do 1024x768 and can't do anything other than mirroring the laptop display. I did upgrade the i915 drivers from the 2.17 that ships with Precise to 2.19, which is the most recent, but had no luck getting anything to display. Here's my lspci output: 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05) 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5) 00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5) 00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5) 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05) 01:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01) 02:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5116 PCI Express Card Reader (rev 01) 08:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0) Here's lshw -C video: *-display UNCLAIMED description: VGA compatible controller product: 2nd Generation Core Processor Family Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list configuration: latency=0 resources: memory:c0000000-c03fffff memory:b0000000-bfffffff ioport:4000(size=64) Both outputs are generated after booting with i915.modeset=0. Here's a complete Xorg.log file from a boot into a black screen: https://gist.github.com/479ce06454e47d6123e1 The graphics card is an Intel HD 3000 integrated GPU. I've never had problems with Intel hardware on Ubuntu before, so this is very surprising.
If you could provide a method to make i915 work, suggest alternative drivers, offer a way to boot with i915.modeset=0 but with higher resolutions and KMS on, or explain what is happening and how to fix it, I'll give you an answer badge. :) Thanks

    Read the article

  • How do you make a bullet ricochet off a vertical wall?

    - by Bagofsheep
    First things first. I am using C# with XNA. My game is top-down and the player can shoot bullets. I've managed to get the bullets to ricochet correctly off horizontal walls. Yet, despite using similar methods (e.g. http://stackoverflow.com/questions/3203952/mirroring-an-angle) and reading other answered questions about this subject, I have not been able to get the bullets to ricochet off a vertical wall correctly. Any method I've tried has failed and sometimes made ricocheting off a horizontal wall buggy. Here is the collision code that calls the ricochet method: //Loop through returned tile rectangles from quad tree to test for wall collision. If a collision occurs perform collision logic. for (int r = 0; r < returnObjects.Count; r++) if (Bullets[i].BoundingRectangle.Intersects(returnObjects[r])) Bullets[i].doCollision(returnObjects[r]); Now here is the code for the doCollision method. public void doCollision(Rectangle surface) { if (Ricochet) doRicochet(surface); else Trash = true; } Finally, here is the code for the doRicochet method. public void doRicochet(Rectangle surface) { if (Position.X > surface.Left && Position.X < surface.Right) { //Mirror the bullet's angle. Rotation = -1 * Rotation; //Moves the bullet in the direction of its rotation by given amount. moveFaceDirection(Sprite.Width * BulletScale.X); } else if (Position.Y > surface.Top && Position.Y < surface.Bottom) { } } Since I am only dealing with vertical and horizontal walls at the moment, the if statements simply determine if the object is colliding from the right or left, or from the top or bottom. If the object's X position is within the tile's X boundaries (left and right sides), it must be colliding with the top or bottom, and vice versa. As you can see, the else if statement is empty and is where the correct code needs to go.
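    For reference, here is one plausible completion of that empty branch, assuming Rotation is an angle measured from the positive X axis (which is consistent with Rotation = -Rotation working for horizontal walls): a vertical wall mirrors the angle across the Y axis, i.e. Rotation becomes Pi - Rotation. This is a sketch of the idea, not the asker's code.

        using Microsoft.Xna.Framework;

        public void doRicochet(Rectangle surface)
        {
            if (Position.X > surface.Left && Position.X < surface.Right)
            {
                // Horizontal wall: mirror across the X axis (theta -> -theta).
                Rotation = -1 * Rotation;
                moveFaceDirection(Sprite.Width * BulletScale.X);
            }
            else if (Position.Y > surface.Top && Position.Y < surface.Bottom)
            {
                // Vertical wall: mirror across the Y axis (theta -> Pi - theta).
                Rotation = MathHelper.Pi - Rotation;
                moveFaceDirection(Sprite.Width * BulletScale.X);
            }
        }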

    Read the article

  • Oracle Data Protection: How Do You Measure Up? - Part 1

    - by tichien
    This is the first installment in a blog series that examines the results of a recent database protection survey conducted by Database Trends and Applications (DBTA) Magazine. All Oracle IT professionals know that a sound, well-tested backup and recovery strategy plays a foundational role in protecting their Oracle database investments, which in many cases represent the lifeblood of business operations. But just how common are the data protection strategies used and the challenges faced across various enterprises? In January 2014, Database Trends and Applications Magazine (DBTA), in partnership with Oracle, released the results of its “Oracle Database Management and Data Protection Survey”. Two hundred Oracle IT professionals were interviewed on various aspects of their database backup and recovery strategies, in order to identify the top organizational and operational challenges for protecting Oracle assets. Here are some of the key findings from the survey: The majority of respondents manage backups for tens to hundreds of databases, representing a total data volume of 5 to 50 TB (14% manage 50 to 200 TB and some up to 5 PB or more). About half of the respondents (48%) use HA technologies such as RAC, Data Guard, or storage mirroring; however, these technologies are deployed on only 25% of their databases (or less). This indicates that backups are still the predominant method for database protection among enterprises. Weekly full and daily incremental backups to disk were the most popular strategy, used by 27% of respondents, followed by daily full backups, which are used by 17%. Interestingly, over half of the respondents reported that 10% or less of their databases undergo regular backup testing.  A few key backup and recovery challenges resonated across many of the respondents: Poor performance and impact on productivity (see Figure 1) – 38% of respondents indicated that backups are too slow, resulting in prolonged backup windows. In a similar vein, 23% complained that backups degrade the performance of production systems. Lack of continuous protection (see Figure 2) – 35% revealed that less than 5% of Oracle data is protected in real-time. Management complexity – 25% stated that recovery operations are too complex (see Figure 1), 31% reported that backups need constant management (see Figure 1), and 45% changed their backup tools as a result of growing data volumes, while 29% changed tools due to the complexity of the tools themselves. Figure 1: Current Challenges with Database Backup and Recovery. Figure 2: Percentage of Organization’s Data Backed Up in Real-Time or Near Real-Time. In future blogs, we will discuss each of these challenges in more detail and bring insight into how the backup technology industry has attempted to resolve them.

    Read the article

  • RoboCopy fails with "the specified network name is no longer available"

    - by Justin Scott
    We have a scheduled task that runs robocopy periodically to mirror a rather large folder structure from one server to another (thousands of folders, 100,000+ files, 50+ GB in size). There is a share on the receiving server where the mirror gets stored. We're running the task from the origin server, connecting out to the share on the receiving end. Both servers run Windows Server 2003 and are connected to the same network switch (100 Mbps). The process will sometimes complete all the way through without error. More often than not, however, at some point during the process (it seems random as to where), robocopy will fail with the error "The specified network name is no longer available". It will wait 30 seconds and try the file again, and eventually give up after a number of retries. The process will repeat at the next scheduled interval and may complete... or not. When this occurs I am not able to access the share at all on the destination server from anywhere on the network for up to 30 minutes. There is nothing else on the network using this share. My question is: what does this error mean specifically? Why is the share "dropping off" and becoming inaccessible? Is there a way to prevent it and make the file mirroring more stable?

    Read the article

  • ZFS + FreeBSD + virtualbox

    - by John
    Hi, I'm configuring a FreeBSD server hosting VirtualBox, serving half a dozen mission-critical, busy mail servers. I just learned about ZFS and I'm quite attracted, but have a few questions: What is the CPU overhead of ZFS? I googled and found little (or no) benchmarking for that. From what I learned, when ZFS updates files, it keeps the old file as a snapshot and writes the updated part for the new version. However, that would mean each snapshot it keeps requires significant storage overhead. How much is this storage overhead? For example, suppose I have 2TB of usable space; how much space can actually be used for the latest version of files one year later? Is FreeBSD with ZFS hosting VirtualBox, serving half a dozen busy, mission-critical guest mail servers, a reasonable combination? Anything in particular to be careful with? And can I still choose ZFS for the guest OSs? I ask because I may build another identical box for redundancy, and will need to do some mirroring between each pair of the guest systems across the boxes. I'm trying to configure a Dell R710 for this. From what I learned, I shouldn't choose any RAID at all; is that true? In that case, do the drives still arrive hot-swappable? This may sound a bit pathetic, but since I have no experience with ZFS at all, and this is a mission-critical server, I'll just ask just in case: I'm choosing twin Intel L5630 processors and 6 x 600GB 15K RPM Serial-Attached SCSI drives. If I need more space in the future, I would just hot-swap some drives with larger capacity to expand the storage. There is no problem with these, right?

    Read the article

  • Check for updates of a specific Debian package list

    - by Erwan Queffélec
    The setup: I run a Debian Squeeze host that I use to build a multi-language project (Python, Java, PHP...) and generate custom packages (Debian and RPM) automatically (through Jenkins). The problem: The target distributions of those Debian packages are Etch, Lenny and Squeeze. But our project has some native dependencies that are available only through the DebianRelease + 1 repository (i.e. Lenny + 1 == Squeeze, Squeeze + 1 == Wheezy). We, for example, need the jetty packages from Squeeze in Lenny, and the cyrus-imapd-2.4 packages from Wheezy in Squeeze. Some additional info: Some packages we can simply 'backport by hand' by mirroring the DebianRelease + 1 packages to our own repositories. For instance, the jetty package from Squeeze will run fine on Lenny because it doesn't need an s**tload of additional dependencies. However, we do need to rebuild some packages. For instance, cyrus-imapd-2.4 from Wheezy has a lot of unsatisfied dependencies on Squeeze, so we need to rebuild it in Squeeze and then upload it to our repo. The question: I need a simple way of knowing if there are any updates on those extra packages (both "normal" and "security" updates). I could write a simple script that runs weekly, gets some parameters from a file, and generates an update report. Let's say the file looks like this: jetty:squeeze cyrus-imapd-2.4:wheezy The script should run as a normal user so as not to mess up the system apt configuration, and issue the appropriate commands to generate that report. Does Debian have some built-in apt-* commands/options dedicated to that kind of problem that I could use to write this script? If not, can someone think of another clean solution to achieve what I need?

    Read the article

  • SAN performance issues storing SQL Server tempdb on a SAN that's being backed up

    - by user42724
    I'm afraid I don't know much about SANs, so please forgive my lack of detail or technical terms. As a developer I've just completed a new application and put it on an existing production system, but it appears to have tipped the scales regarding the performance of the backups being taken from the SAN. As I understand it, there's a mirror of the SAN being taken, usually constantly, at the block level. However, there seem to be so many new writes to the disk that the SAN mirroring/backup process can no longer keep up. I believe I've narrowed this down to SQL Server's tempdb, which lives on the drive that contributes the largest portion of the problem! In fact I think tempdb has been contributing the largest portion of the issues all along, regardless of my application! My question therefore is whether tempdb should ever be mirrored or backed up on the SAN, and whether anyone else has gone through this sort of pain already? I'm wondering whether it's best practice to make sure that tempdb is never mirrored on a SAN, simply because any writes to it don't need to be saved. This also raises a slightly connected question - is it better to rely on SQL Server's built-in database backup tools (DB in full recovery mode with full/differential and transaction log backups) or, as is the case with our application, to leave SQL Server in simple recovery mode and never back it up since the SAN is mirrored and backed up? Many thanks

    Read the article

  • Configuration for a two machine ESXi cluster using VSA to present local storage to VMs

    - by MDMarra
    I'm designing a little vSphere 5 cluster for one of our remote sites. We have some IBM x3650s that have 6x 300GB 10K RPM drives in them, along with dual quad core CPUs and 24GB RAM. Because we use HP P4500 G2s at our primary site, we have licenses available for HP P4000 VSAs. I thought that this would be the perfect opportunity to use them. Below is a basic drawing of what I want to accomplish: I want to run a P4000 VSA on each server and run them in a Network RAID-10 (Lefthand speak for network mirroring, think of it as RAID 1 across nodes or as an active/active storage cluster). I will then present this storage to guests that will run on this mini-cluster. It will be managed by a vCenter Server on our main site. All connections will be GbE with two dedicated to storage. Management and Data will share a pair of connections, since I don't expect there to be high load. These servers are just there to provide directory services, dhcp, printing, etc. Does anyone see anything potentially wrong with this approach? Is this the best way to do this without adding additional dedicated storage heads? Are there any pitfalls in this design, besides the lack of dedicated Data/Mgmt interfaces?

    Read the article

  • Backup, Migrate or Clone Failing CentOS 4 (LVM)

    - by Hegelworm
    Hello there, I've been running a BlueQuartz CentOS 4 system (Nuonce.net distro) for a few years now and although the hard drive (Deskstar) has always been a bit noisy, on a few recent occasions I've heard it having trouble spinning up. Basically, I want to clone this drive to a similar sized one (80 Gig). I've spent many hours reading up on dd, dd_rescue, rsync, Clonezilla and LVM mirroring, yet the sheer number of options and nightmarish accounts have left me frozen - unable to make an informed decision as to how to start. I've made a few attempts. dd failed after about 2 hours, as, although the drives appeared to be identical on the surface (ATA Seagate Barracudas, Thai not Chinese), the destination drive is slightly smaller. My most recent attempt involved using a Debian CD to format the new drive and then rsync-ing everything over and editing the new drive's grub and fstab to reflect the changes. No joy here either, as I hadn't chosen LVM when partitioning the destination drive and it wouldn't boot. As you can probably tell, I'm out of my depth here and a panic-invoking mixture of caution and frustration has prompted me to sign up here. The server itself, although not strictly a production environment, has a very specific installation of Festival, LAME and FFmpeg and provides the back-end for a text-to-speech jQuery plugin that I've built over the last 2 years. I'm also planning to rebuild the whole TTS system on Debian, as the existing CentOS system still has PHP4 etc. For now though, I'd really like to just shift everything over to a new drive. As this is my first post, please feel free to lay any house rules on me that I might've overlooked; I've been hovering around StackOverflow for a while now but have only just signed up. Many thanks.

    Read the article

  • Why does RoboCopy create a hidden system folder?

    - by Svish
    I thought I would try out RoboCopy for mirroring the contents of a folder to another hard drive. And it seems like it worked. But, for some reason, to see the destination folder I have to both enable Show hidden files, folders and drives and disable Hide protected operating system files. Why is this? Both the source and destination folders were initially visible, normal directories. When I open up the properties for that destination folder, the Hidden attribute is even disabled. What is going on here? Is it because I ran it in an administrator command prompt? Or is it an issue with my choice of modifiers? Or does robocopy really just work this way? robocopy E: I:\E /COPYALL /E /R:0 /MIR /B /ETA Update: Tried to copy another drive to another folder, and I got the same thing happening there. But when I try to just copy a folder to a different folder, then the destination folder stays normal. Could it be because I'm copying a whole drive? If so, how can I prevent this from happening? Cause I really do want to copy the whole drive...
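    A likely explanation, though it is not confirmed in the question: when robocopy mirrors from a volume root with /COPYALL, it copies the root directory's attributes onto the destination folder, and volume roots carry the Hidden and System attributes. If that is the cause, clearing those attributes on the destination is enough; a minimal C# sketch (the I:\E path is simply the question's example destination):

        using System.IO;

        class FixMirrorAttributes
        {
            static void Main()
            {
                // The destination folder from the question's robocopy command.
                var dir = new DirectoryInfo(@"I:\E");

                // Clear the Hidden and System attributes copied from the source volume root.
                dir.Attributes &= ~(FileAttributes.Hidden | FileAttributes.System);
            }
        }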

    Read the article

  • SQL 2000 Backup/Export Process - Can't find SQL 2000 Enterprise Manager, Can't use Mgmt Studio Express

    - by 1nsane
    I need to make a backup of a client's SQL 2000 database, however there are a few issues preventing me from doing so using the traditional methods. I've tried using SQL Management Studio Express, but the host doesn't grant sufficient privileges to create a backup and I'm getting some strange error messages. I've also tried using "Generate Scripts" to recreate the schema and then the DTS Wizard to migrate the data, but the IDs set up with the identity specification property are not consistent with the live database once copied over. This results in some foreign key breakage... If I remember correctly, I was able to use Microsoft SQL 2000 Enterprise Manager to perform the task before, but I can't find this anywhere... it seems Microsoft has pulled most SQL Server 2000 material from their site. Does anyone know where I can find a copy of Enterprise Manager (or a trial of SQL Server 2000, which I believe comes with the component)? Or conversely, does anyone know of any other tools (preferably non-commercial) that are capable of mirroring remote SQL Server 2000 DBs? Thanks!

    Read the article

  • What determines what resolutions a laptop is willing to output over VGA?

    - by Joshua McKinnon
    I'm responsible for several conference rooms and have set up 1080p projectors, and I provide both HDMI and VGA connectivity: HDMI for DisplayPort and Mini-DisplayPort, and VGA as a fallback, universal option. Contrary to what I expected, people seem to have much more trouble with the HDMI than VGA, so VGA gets used a lot more than you'd think (even though most workstation laptops made in the last 3-4 years have DisplayPort or Mini-DisplayPort...). Also to my surprise, VGA carries 1080p over a 50 ft cable run with very minimal degradation on certain laptops - other laptops just don't offer 1080p as a resolution choice and top out at 1600x1200 or something else. Specific example: a ThinkPad W530 will do 1080p, a W520 won't, over VGA. (Both do 1080p over DisplayPort/Mini-DP.) What determines what resolutions a laptop is willing to output over VGA? I'm thinking this will come down to either a video driver that says it supports only certain resolutions for output, or limitations of the RAMDAC (which wouldn't be in play, at least DAC-wise, on a digital output, but WOULD on VGA, an analog output). The basic reason for the question is that I noticed, say, a ThinkPad W520 with a 1080p built-in display will output 1080p fine over DisplayPort to a 1080p projector, but will cap out at 1600x1200 (practically the same pixel count, just a little shy) on VGA. Now, this wouldn't be surprising at all except SOME laptops have no issue outputting 1080p over VGA, even with lower native resolutions. Why do I care? Well, if there's some way I could enable it... for situations where my users end up using VGA anyway, it's preferable for display mirroring if they can output their laptop's native resolution, which, you guessed it, is very often 1080p on 15" models. DISCLAIMER: This is primarily a curiosity; I'm not claiming 1080p over VGA is ideal by any means, but hey, if it works. I've seen HDMI start artifacting more over same-length, same-gauge cabling (up to 50' runs in certain rooms). If you think this is better suited to SuperUser, please move it, but this is framed from an IT standpoint of something that affects a real pool of users in a multiple-conference-room, 50+ deployed laptop scenario.

    Read the article

  • Advice needed: warm backup solution for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a warm backup server for a SQL Server Express instance running a single database? Sitting beside my production SQL Server 2008 Express box I have a second physical box currently doing nothing. I want to use this second box as a warm backup server by somehow replicating my production database in near real time (a little bit of data loss is acceptable). The database is very small and resources are utilized very lightly. In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead. Although Express doesn't support log shipping natively, I am thinking that I could manually script a poor man's version of it, where I use batch files to take log backups, copy them across the network, and apply them to the second server at 5-minute intervals. Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do? Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think it is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.
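    A minimal sketch of the primary-side half of that "poor man's log shipping" idea, assuming integrated security, a database named ProductionDb, a local staging folder, and a share \\backupserver\logship - all placeholder names invented for the example. A matching job on the standby would RESTORE LOG ... WITH NORECOVERY for each copied file; none of this is a supported Express feature, just scripting around BACKUP LOG.

        using System;
        using System.Data.SqlClient;
        using System.IO;

        class PoorMansLogShipping
        {
            static void Main()
            {
                // Placeholder connection string and paths -- adjust for the real environment.
                const string connStr = @"Server=.\SQLEXPRESS;Database=master;Integrated Security=true";
                string backupFile = string.Format(@"C:\LogShip\ProductionDb_{0:yyyyMMddHHmmss}.trn",
                                                  DateTime.UtcNow);

                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();
                    // Take a transaction log backup (the database must be in FULL recovery mode).
                    var cmd = new SqlCommand("BACKUP LOG [ProductionDb] TO DISK = @file WITH INIT", conn);
                    cmd.Parameters.AddWithValue("@file", backupFile);
                    cmd.ExecuteNonQuery();
                }

                // Copy the log backup to the warm standby; a job there restores it WITH NORECOVERY.
                File.Copy(backupFile,
                          Path.Combine(@"\\backupserver\logship", Path.GetFileName(backupFile)),
                          true);
            }
        }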

    Read the article

  • How to achieve the following RTO & RPO with logshipping only using SQL Server?

    - by Jimmy Chandra
    Trying to come up with a viable backup/restore and log shipping solution for achieving the following: 15 minutes Recovery Point Objective (no more than 15 minutes of data loss at any time); 5 minutes Recovery Time Objective (must be able to get the db up and running again within 5 minutes). Considering using log shipping only (which I think is kind of pushing it, but I want to know if anyone else knows how to achieve this). Some other info for consideration: Using 40 Gbit/sec fiber channel between the primary and disaster recovery (DRC) sites. The sites are about 600 km apart. At close of business, the amount of data generated is predicted to be about 150 MB/sec. Log backup is planned for every 5 min. Doing some rough calculation I came up with the following numbers: 40 Gbit/sec = 5 MB/sec @ 100% network efficiency. 5 MB/sec = 300 MB/min. At 300 MB/min, the total amount of data that can be transferred within the 5-min RTO is about 1.5 GB, but that will leave no time for the actual backup and restore, so if we cut it down to 3 min of log shipping time, which equals ~900 MB over 3 minutes at 100% network efficiency, that will leave about 1 minute of backup time and 1 minute of restore time. I currently don't have any information on whether the system being used is capable of restoring 900 MB in 1 min, but assume it can. For the COB scenario... 150 MB/sec, and considering the 3-min log shipping time, that should equal about 27 GB of data over 3 minutes...??? I think this is where the SLA will break... since there is no way to transfer 27 GB of data over a 40 Gbit/sec line in 3 min. Can I get someone else's opinion? I am thinking database mirroring might be a better answer for this...

    Read the article

  • graphics performance better on battery?

    - by Scott Beeson
    Anyone have any idea why my laptop would perform (considerably) better while on battery than while plugged in? It's a Dell Latitude E6420 with Windows 8 Pro. I tried mirroring all the settings in the selected power plan from "On battery" to "Plugged in" and that didn't help. I then just restored the defaults for all power plans (Balanced and High performance). I'm still seeing the same results. The best example where it is most noticeable (don't laugh) is Sim City Social in Chrome. I'm probably seeing a performance increase of 5x on battery versus plugged in. This is easily reproducible too. I'm very confused. Could it be caused by dust? The laptop isn't that old and there is no visible dust. I'm not going to take it apart to check the insides as it's a corporate laptop. Could it be overheating? On battery: Sim City Social 68 degrees max, Civ V 77 degrees max. On charger: Sim City Social 68 degrees, Civ V not tested. See answer below... I'm retarded

    Read the article

  • Configuring three monitors with two Radeon X1600/X1650 graphics cards under Ubuntu

    - by cpm
    I have three SyncMaster 932a monitors I want to use with two Radeon X1600/X1650 cards under Linux. I am running X.org X Server 1.6.0, as provided by Ubuntu's Wubi installer. After turning off mirroring, I ended up with this xorg.conf: Section "Monitor" Identifier "Configured Monitor" EndSection Section "Screen" Identifier "Default Screen" Monitor "Configured Monitor" Device "Configured Video Device" SubSection "Display" Virtual 2560 1024 EndSubSection EndSection Section "Device" Identifier "Configured Video Device" EndSection The left monitor had a menu bar and a task bar, the center monitor was just desktop, and windows would maximize to the current monitor. The third monitor and second graphics card weren't being used at all. Then I changed my configuration to manually specify each card with their PCI bus: Section "ServerLayout" Identifier "TheLayout" Screen 0 "Radeon Screen 1" Screen 1 "Radeon Screen 2" RightOf "Radeon Screen 1" EndSection Section "Screen" Identifier "Radeon Screen 1" Monitor "Configured Monitor" Device "Radeon the First" SubSection "Display" Virtual 2560 1024 EndSubSection EndSection Section "Screen" Identifier "Radeon Screen 2" Monitor "Configured Monitor" Device "Radeon the Second" EndSection Section "Device" Identifier "Radeon the First" Driver "radeon" BusID "PCI:1:0:0" EndSection Section "Device" Identifier "Radeon the Second" Driver "radeon" BusID "PCI:2:0:0" EndSection Section "Monitor" Identifier "Configured Monitor" EndSection Now both the left and right monitors have task bars and menu bars. Windows cannot be dragged from the first two monitors to the third monitor. Also, maximizing in the left or center window fills both monitors. I also tried adding Option "Xinerama" "true" to the ServerLayout section. X11 wasn't able to start up. I want to: Allow moving windows along all three monitors. Maximizing only fills the current monitor. Either have menu/task bars on only the left monitor or all three monitors How can I make this possible?

    Read the article

  • Virtual machines with failover setup

    - by kimmmo
    We have three servers and our plan is to run a number of virtual machines on them in such a manner that if one of the nodes blows up, we can either quickly or seamlessly get a spare running on another node. In addition to the normal networking, they're interconnected via dual 10 Gbit NICs, so networked RAID/mirroring shouldn't be a problem. The guest VMs are mostly going to be running text-mode Linux, but of course it wouldn't hurt to be able to spin up a non-mission-critical Windows guest for running Visual Studio or checking IE compatibility of a web app. We've spent some time trying to get some magical cloud setup running using StackOps and Crowbar, but it started to look like they were offering way too much and were too complicated for our needs. The next candidate, I think, is Ubuntu 11.04 Server + KVM + Ganeti + DRBD, unless you can come up with a suggestion for a better solution that we have missed. Requirements: Installation should be simple or at least understandable without being on the dev team. A browser interface for creating and managing VMs is a nice bonus. A single node's hardware failure should cause minimal downtime for VMs that were running on that node. Adding more nodes should be possible without shutting down the VMs.

    Read the article

  • Backing up a default Windows installation with dd from Linux running on another partition - is this feasible?

    - by Marek
    I am preparing to reinstall my system. I am thinking about creating a multi-boot setup with a Linux distro + Windows 7 to choose from when starting up. I would love to be able to skip all the hassle of reinstalling Windows and all programs when it starts becoming too slow in the future, thus I would like to mirror my fresh Windows system partition with some programs preinstalled. I am thinking about installing Ubuntu, making a partition for Windows, installing Windows with the basic environment (Visual Studio, Office, etc.), then booting into Linux and making an image of the Windows partition with dd. I am not familiar with Linux at all, so I am a little afraid something may go wrong along the way. Is it possible to do it this way? Will I be able to partition my existing disk for multi-boot easily after I install Ubuntu? Will I be able to recover the Windows partition easily using dd when I need to re-create a fresh Windows partition in the future? What other (better) approach can you recommend to achieve the goal of easy disk mirroring (for free)?

    Read the article
