Search Results


  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with Snapshot Isolation Mode. The Data Governance module will not function correctly without this mode enabled. All new Data Relationship Management repositories are created with this mode enabled by default. This mode makes SQL Server (2005+) behave more like Oracle DB: readers simply see older versions of rows while a write is in progress, instead of being blocked by locks while the write takes place. Many common sources of deadlocks are eliminated. For example, if one user starts a 5-minute transaction updating half the rows in a table, without snapshot isolation everyone else reading the table will be blocked waiting. With snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading; the writes can begin immediately without affecting the current transactions. To make this change, make sure no one is using the target database (e.g., put it into single-user mode), then run these commands:

        ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON
        ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON

    Please coordinate with your DBA team to ensure tempdb is appropriately set up to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed. Let me also take this opportunity to strongly recommend that you use solid-state storage for your databases, with appropriate iSCSI, Fibre Channel, or SAN bandwidth. The performance gains are significant, and there is no excuse for not using 100% solid-state storage in 2013. In fact, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any system, whether laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, etc.
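
    A minimal sketch of the full sequence with the single-user step included (the repository name DRM_Repo is hypothetical; substitute your own database):

        -- Kick out current sessions, enable both snapshot options, reopen
        ALTER DATABASE [DRM_Repo] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        ALTER DATABASE [DRM_Repo] SET ALLOW_SNAPSHOT_ISOLATION ON;
        ALTER DATABASE [DRM_Repo] SET READ_COMMITTED_SNAPSHOT ON;
        ALTER DATABASE [DRM_Repo] SET MULTI_USER;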


  • How to Implement Project Type "Copy", "Move", "Rename", and "Delete"

    - by Geertjan
    You've followed the NetBeans Project Type Tutorial and now you'd like to let the user copy, move, rename, and delete projects conforming to your project type. When they right-click a project, they should see the relevant menu items, and those menu items should provide dialogs for user interaction, followed by event handling code to deal with the current operation. Right now, at the end of the tutorial, the "Copy" and "Delete" menu items are present but disabled, while the "Move" and "Rename" menu items are absent. The NetBeans Project API provides a built-in mechanism for project-level "Copy", "Move", "Rename", and "Delete" actions. All the functionality is there for you to use; all you need to do is a bit of enablement and configuration, which is described below. To get started, read the following from the NetBeans Project API:

    http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/ActionProvider.html
    http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/CopyOperationImplementation.html
    http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/MoveOrRenameOperationImplementation.html
    http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/DeleteOperationImplementation.html

    Now, let's do some work. For each of the menu items we're interested in, we need to do the following: provide enablement and invocation handling in an ActionProvider implementation; provide appropriate OperationImplementation classes; add the new classes to the Project Lookup; make the Actions visible on the Project Node; and run the application to verify the Actions work as you'd like. Here we go:

    Create an ActionProvider. Here you specify the Actions that should be supported, the conditions under which they should be enabled, and what should happen when they're invoked, reusing the default operations provided by the NetBeans Project API:

        class CustomerActionProvider implements ActionProvider {
            @Override
            public String[] getSupportedActions() {
                return new String[]{
                    ActionProvider.COMMAND_RENAME,
                    ActionProvider.COMMAND_MOVE,
                    ActionProvider.COMMAND_COPY,
                    ActionProvider.COMMAND_DELETE
                };
            }
            @Override
            public void invokeAction(String string, Lookup lkp) throws IllegalArgumentException {
                if (string.equalsIgnoreCase(ActionProvider.COMMAND_RENAME)) {
                    DefaultProjectOperations.performDefaultRenameOperation(
                            CustomerProject.this, "");
                }
                if (string.equalsIgnoreCase(ActionProvider.COMMAND_MOVE)) {
                    DefaultProjectOperations.performDefaultMoveOperation(
                            CustomerProject.this);
                }
                if (string.equalsIgnoreCase(ActionProvider.COMMAND_COPY)) {
                    DefaultProjectOperations.performDefaultCopyOperation(
                            CustomerProject.this);
                }
                if (string.equalsIgnoreCase(ActionProvider.COMMAND_DELETE)) {
                    DefaultProjectOperations.performDefaultDeleteOperation(
                            CustomerProject.this);
                }
            }
            @Override
            public boolean isActionEnabled(String command, Lookup lookup) throws IllegalArgumentException {
                if ((command.equals(ActionProvider.COMMAND_RENAME))) {
                    return true;
                } else if ((command.equals(ActionProvider.COMMAND_MOVE))) {
                    return true;
                } else if ((command.equals(ActionProvider.COMMAND_COPY))) {
                    return true;
                } else if ((command.equals(ActionProvider.COMMAND_DELETE))) {
                    return true;
                }
                return false;
            }
        }

    Importantly, to round off this step, add "new CustomerActionProvider()" to the "getLookup" method of the project.
    If you were to run the application right now, all the Actions we're interested in would be enabled (if they are visible, as described in step 4 below), but when you invoke any of them you'd get an error message, because each of the DefaultProjectOperations above looks in the Lookup of the Project for an implementation of a class handling the operation. That's what we're going to do in the next step.

    Provide Implementations of Project Operations. For each of our operations, the NetBeans Project API lets you implement classes to handle the operation. The dialogs for interacting with the project are provided by the NetBeans project system, but what happens with the folders and files during the operation can be influenced via the operations. Below are the simplest possible implementations, i.e., here we assume we want nothing special to happen. Each of the below needs to be in the Lookup of the Project in order for the operation invocation to succeed.

        private final class CustomerProjectMoveOrRenameOperation implements MoveOrRenameOperationImplementation {
            @Override
            public List<FileObject> getMetadataFiles() {
                return new ArrayList<FileObject>();
            }
            @Override
            public List<FileObject> getDataFiles() {
                return new ArrayList<FileObject>();
            }
            @Override
            public void notifyRenaming() throws IOException {
            }
            @Override
            public void notifyRenamed(String nueName) throws IOException {
            }
            @Override
            public void notifyMoving() throws IOException {
            }
            @Override
            public void notifyMoved(Project original, File originalPath, String nueName) throws IOException {
            }
        }

        private final class CustomerProjectCopyOperation implements CopyOperationImplementation {
            @Override
            public List<FileObject> getMetadataFiles() {
                return new ArrayList<FileObject>();
            }
            @Override
            public List<FileObject> getDataFiles() {
                return new ArrayList<FileObject>();
            }
            @Override
            public void notifyCopying() throws IOException {
            }
            @Override
            public void notifyCopied(Project prjct, File file, String string) throws IOException {
            }
        }

        private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation {
            @Override
            public List<FileObject> getMetadataFiles() {
                return new ArrayList<FileObject>();
            }
            @Override
            public List<FileObject> getDataFiles() {
                return new ArrayList<FileObject>();
            }
            @Override
            public void notifyDeleting() throws IOException {
            }
            @Override
            public void notifyDeleted() throws IOException {
            }
        }

    Also make sure to put the above classes into the Project Lookup.

    Check the Lookup of the Project. The "getLookup()" method of the project should now include the classes you created above:

        @Override
        public Lookup getLookup() {
            if (lkp == null) {
                lkp = Lookups.fixed(new Object[]{
                    this,
                    new Info(),
                    new CustomerProjectLogicalView(this),
                    new CustomerCustomizerProvider(this),
                    new CustomerActionProvider(),
                    new CustomerProjectMoveOrRenameOperation(),
                    new CustomerProjectCopyOperation(),
                    new CustomerProjectDeleteOperation(),
                    new ReportsSubprojectProvider(this),
                });
            }
            return lkp;
        }

    Make Actions Visible on the Project Node. The NetBeans Project API gives you a number of CommonProjectActions, including ones for the actions we're dealing with.
    Make sure the items below are in the "getActions" method of the project node:

        @Override
        public Action[] getActions(boolean arg0) {
            return new Action[]{
                CommonProjectActions.newFileAction(),
                CommonProjectActions.copyProjectAction(),
                CommonProjectActions.moveProjectAction(),
                CommonProjectActions.renameProjectAction(),
                CommonProjectActions.deleteProjectAction(),
                CommonProjectActions.customizeProjectAction(),
                CommonProjectActions.closeProjectAction()
            };
        }

    Run the Application. When you run the application, you should see the new menu items on the project node. Let's now try out the various actions:

    Copy. When you invoke the Copy action, a dialog asks for a new project name and location; the copy is performed when the Copy button is clicked. The warning message shown in red in that dialog might not be relevant to your project type. When you right-click the application and choose Branding, you can find the string in the Resource Bundles tab. However, note that the message will be shown in red no matter what the text is, so you can really only put something like a warning message there; if you have no text at all, it will also look odd. If the project has subprojects, the copy operation will not automatically copy the subprojects. Take a look here and here for similar, more complex scenarios.

    Move. When you invoke the Move action, a comparable dialog is shown for choosing the new location.

    Rename. The Rename Project dialog is shown when you invoke the Rename action. I tried it, and both the display name and the folder on disk are changed.

    Delete. When you invoke the Delete action, a confirmation dialog appears. In the default scenario the checkbox is not checkable, and when the dialog is confirmed, the project is simply closed, i.e., the node hierarchy is removed from the application. However, if you truly want to let the user delete the project on disk, pass the Project to the DeleteOperationImplementation and then add the children of the Project you want to delete to the getDataFiles method:

        private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation {
            private final CustomerProject project;
            private CustomerProjectDeleteOperation(CustomerProject project) {
                this.project = project;
            }
            @Override
            public List<FileObject> getDataFiles() {
                List<FileObject> files = new ArrayList<FileObject>();
                FileObject[] projectChildren = project.getProjectDirectory().getChildren();
                for (FileObject fileObject : projectChildren) {
                    addFile(project.getProjectDirectory(), fileObject.getNameExt(), files);
                }
                return files;
            }
            private void addFile(FileObject projectDirectory, String fileName, List<FileObject> result) {
                FileObject file = projectDirectory.getFileObject(fileName);
                if (file != null) {
                    result.add(file);
                }
            }
            @Override
            public List<FileObject> getMetadataFiles() {
                return new ArrayList<FileObject>();
            }
            @Override
            public void notifyDeleting() throws IOException {
            }
            @Override
            public void notifyDeleted() throws IOException {
            }
        }

    Now the user will be able to check the checkbox, causing the method above to be called in the DeleteOperationImplementation. Hope this answers some questions or at least gets the discussion started. Before asking questions about this topic, please take the steps above and only then attempt to apply them to your own scenario.
    Useful implementations to look at:

    http://kickjava.com/src/org/netbeans/modules/j2ee/clientproject/AppClientProjectOperations.java.htm
    https://kenai.com/projects/nbandroid/sources/mercurial/content/project/src/org/netbeans/modules/android/project/AndroidProjectOperations.java


  • Can't run minecraft on ubuntu 12.04 lts [duplicate]

    - by user170011
    This question already has an answer here: How to correctly install and troubleshoot Minecraft (Client) (3 answers)

    I was trying to run Minecraft on my laptop with Ubuntu 12.04 LTS 64-bit. I have a Lenovo IdeaPad P580 with 7.7 GiB of RAM and an Intel® Core™ i7-3520M CPU @ 2.90GHz × 4. Under the graphics section of the system overview in Ubuntu it says I have none installed. My computer comes with an NVIDIA GeForce graphics card, but it isn't recognized. When I start Minecraft I get this crash report:

        ---- Minecraft Crash Report ----
        // Shall we play a game?

        Time: 24/06/13 7:23 PM
        Description: Failed to start game

        org.lwjgl.LWJGLException: Could not init GLX
            at org.lwjgl.opengl.LinuxDisplayPeerInfo.initDefaultPeerInfo(Native Method)
            at org.lwjgl.opengl.LinuxDisplayPeerInfo.<init>(LinuxDisplayPeerInfo.java:52)
            at org.lwjgl.opengl.LinuxDisplay.createPeerInfo(LinuxDisplay.java:684)
            at org.lwjgl.opengl.Display.create(Display.java:854)
            at org.lwjgl.opengl.Display.create(Display.java:784)
            at org.lwjgl.opengl.Display.create(Display.java:765)
            at net.minecraft.client.Minecraft.a(SourceFile:235)
            at avv.a(SourceFile:56)
            at net.minecraft.client.Minecraft.run(SourceFile:507)
            at java.lang.Thread.run(Thread.java:679)

        A detailed walkthrough of the error, its code path and all known details is as follows:

        -- System Details --
        Minecraft Version: 1.5.2
        Operating System: Linux (amd64) version 3.5.0-34-generic
        Java Version: 1.6.0_27, Sun Microsystems Inc.
        Java VM Version: OpenJDK 64-Bit Server VM (mixed mode), Sun Microsystems Inc.
        Memory: 406175448 bytes (387 MB) / 514523136 bytes (490 MB) up to 1908932608 bytes (1820 MB)
        JVM Flags: 2 total; -Xmx2048M -Xms512M
        AABB Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
        Suspicious classes: No suspicious classes found.
        IntCache: cache: 0, tcache: 0, allocated: 0, tallocated: 0
        LWJGL: 2.4.2
        OpenGL: ~~ERROR~~ NullPointerException: null
        Is Modded: Probably not. Jar signature remains and client brand is untouched.
        Type: Client (map_client.txt)
        Texture Pack: Default
        Profiler Position: N/A (disabled)
        Vec3 Pool Size: ~~ERROR~~ NullPointerException: null

    I can run it on different versions of Linux, such as Fedora.
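
    For context (my note, not from the question): "Could not init GLX" generally means no working OpenGL/GLX driver is loaded, which matches the empty graphics line in the system overview. On 12.04 with an NVIDIA card, a sketch of the usual first step (hybrid Intel/NVIDIA laptops like this one may instead need the bumblebee packages):

        # Install the proprietary NVIDIA driver, then reboot
        sudo apt-get install nvidia-current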


  • What can I do to enhance MacBook Pro internal mic sound quality in iMovie?

    - by gaearon
    After using a MacBook for two years, I bought a 17'' MacBook Pro. I'm pretty happy with it, performance and all, but I was also going to record some music videos for YouTube. I play guitar and sing. However, I was extremely disappointed with the sound quality I get by default. I'm 100% sure my 13'' MacBook mic was much better at recording music and singing. Currently the mic can't even handle an acoustic guitar, outputting sound you'd think was recorded 5 years ago in AMR format on a Nokia phone at a loud concert. It totally feels like some lame filter is cutting low and high frequencies. I want to know what settings (visible or hidden) in iMovie or Mac OS itself I might want to tweak in order to get my MBP mic to record clean sound.


  • Removing (Presumably) Extraneous Network Adapters from Device Manager (eg WAN Miniport)

    - by Synetech inc.
    Can anyone shed some light on the default items in the Network Adapters branch of the Windows Device Manager? In addition to the network card, there are always a bunch of other things that I cannot find any useful information on, such as the RAS Async Adapter and all the WAN Miniports (IKEv2, IP(v6), L2TP, Network Monitor, PPPoE, PPTP, SSTP). I would like to trim it down and uninstall whatever possible, but cannot find out exactly what these items are responsible for (and therefore whether or not they are needed on my system). Most of the pages found with Google are either people trying to fix an error with such an item or someone asking what it is and being given an unhelpful, pat response like “just leave them alone” or “they’re necessary”. I highly doubt that is the case, and I’m certain that at least some items can be removed, because even if they become necessary in the future they can be added again (for example, installing Network Monitor or Protowall reinstalls the miniport drivers anyway).


  • Apache connection intermittently times out on LAN

    - by jonescb
    I'm using Fedora 12 with Apache 2.2.14, and I was having this error on 2.2.13 as well. Even when I connect to my server over the LAN, Firefox will occasionally time out while connecting. I can't figure out what is causing this. The error log isn't showing anything. I even emptied the error log file, so that if something happened it'd be a little easier to spot. But I'm still getting timeouts, and nothing in the error log. Can anybody help me find what the problem is? Here is my httpd.conf: Pastebin. It's the default Fedora configuration; I've only changed the ServerName, if I remember correctly. I'm pretty sure it's not the Timeout setting, because on the LAN it should never time out. I don't believe it's a load issue either; I'm the only one connecting to it. I'm not an Apache expert, so if more information is needed I'll need some instructions on how to get that data.
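
    One diagnostic worth trying (my suggestion, not from the original post) is to enable mod_status so you can watch what each worker is doing the moment a connection stalls. A minimal sketch for the default Fedora httpd.conf, assuming mod_status is loaded and your LAN is in 192.168.0.0/16:

        # Show per-worker state at http://server/server-status
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 192.168.0.0/16
        </Location>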


  • Pandaboard crash on startup or freeze after minutes

    - by Meach
    I just received my Pandaboard ES (rev B) and I am having trouble after installing ubuntu-omap4-addons. Once I copied the image ubuntu-12.04-preinstalled-desktop-armhf+omap4.img onto my SD card and booted the Pandaboard with it, I ran the following commands:

        sudo add-apt-repository ppa:tiomap-dev/release
        sudo apt-get update
        sudo apt-get dist-upgrade
        sudo apt-get install ubuntu-omap4-extras

    At the end of the installation of ubuntu-omap4-extras, Ubuntu tells me that a problem occurred when the console displays:

        ldconfig deferred processing now taking place

    Clicking on "report the problem" tells me that the problem concerns pvr-omap4-dkms. I read somewhere that this can happen and that it is better to reinstall pvr-omap4-dkms, which I am doing by running:

        sudo apt-get install --reinstall pvr-omap4-dkms

    Then I reboot. The board then sometimes has difficulty starting Ubuntu: it freezes during the loading page, and the only action I can take is unplugging the board to start it again. Other times, Ubuntu loads successfully but then freezes at some random time, in the range of 20-40 minutes. I searched the internet for a similar bug and found this: https://bugs.launchpad.net/ubuntu/+source/linux-ti-omap4/+bug/971091

    So I typed this in:

        update-rc.d ondemand disable
        apt-get -y install cpufrequtils
        echo 'ENABLE="true" GOVERNOR="performance" MAX_SPEED="0" MIN_SPEED="0"' > /etc/default/cpufrequtils
        cpufreq-set -r -g performance
        reboot

    But it doesn't seem to fix the bug. Another detail: on startup, before the loading screen of Ubuntu (when the two penguins are displayed :)), it shows this:

        [0.297271] CPU1: Unknown IPI message 0x1
        [0.308990] omap_hwmod: mcpdm: _wait_target_ready error: -16
        [0.354705] omap_mux_get_by_name: Could not find signal uart1_cts.uart1_cts
        [0.354766] omap_hwmod_mux_init: Could not allocate device mux entry
        [2.107086] thermal_get_slope:Getting slope is not supported for domain gpu
        [2.107116] thermal_get_offset:Getting offset is not supported for domain gpu
        [2.107299] stm_fw: vendor driver stm_ti1.0 registered
        [8.725555] OMAPRPC: Registration of OMAPRPC rpmsg service returned 0! debug=0

    Any idea what can be wrong? I am not that good with Ubuntu, so any help will be appreciated. Cheers! Meach


  • ZSH - output whole history?

    - by GorillaSandwich
    I recently switched from bash to zsh. In bash, one way (besides recursive search) that I used to find previously-run commands was history | grep whatever, where whatever is the bit of the command I remember. In zsh, this isn't working. history returns only a few items, even though my .zsh_history file contains many entries, which I have configured it to do. How can I output my whole history, suitable for searching with grep? (Note: I started out using ryanb's dotfiles, so perhaps it's a problem with his default settings?)
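
    For reference, zsh's history builtin is a variant of fc -l and only shows the most recent events unless you give it a starting event number. A minimal sketch of the bash-style search (standard zsh behavior, independent of any particular dotfiles):

        # List the entire history from event 1, then search it
        history 1 | grep whatever

        # Equivalent long form
        fc -l 1 | grep whatever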


  • Investigating Strategies For Functional Decomposition

    - by Liam McLennan
    Introducing Functional Decomposition

    Before I begin I must apologise. I think I am using the term 'functional decomposition' loosely, and probably incorrectly. For the purpose of this article I use functional decomposition to mean the recursive splitting of a large problem into increasingly smaller ones, so that the one large problem may be solved by solving a set of smaller problems. The justification for functional decomposition is that the decomposed problem is more easily solved. As software developers we recognise that the smaller pieces are more easily tested, since they do less and are more cohesive. Functional decomposition is important to all scientific pursuits. Once we understand natural selection we can start to look for humanity's ancestral species; once we understand the big bang we can trace our expanding universe back to its origin. Isaac Newton acknowledged the compositional nature of his scientific achievements:

        If I have seen further than others, it is by standing upon the shoulders of giants

    The Two Strategies For Functional Decomposition of Computer Programs

    Private Methods

    When I was working on my undergraduate degree I was taught to functionally decompose problems by using private methods. Consider the problem of painting a house. The obvious solution is to solve the problem as a single unit:

        public void PaintAHouse() {
            // all the things required to paint a house ...
        }

    We decompose the problem by breaking it into parts:

        public void PaintAHouse() {
            PaintUndercoat();
            PaintTopcoat();
        }

        private void PaintUndercoat() {
            // everything required to paint the undercoat
        }

        private void PaintTopcoat() {
            // everything required to paint the topcoat
        }

    The problem can be recursively decomposed until a sufficiently granular level of detail is reached:

        public void PaintAHouse() {
            PaintUndercoat();
            PaintTopcoat();
        }

        private void PaintUndercoat() {
            prepareSurface();
            fetchUndercoat();
            paintUndercoat();
        }

        private void PaintTopcoat() {
            fetchPaint();
            paintTopcoat();
        }

    According to Wikipedia, at least one computer programmer has referred to this process as "the art of subroutining". The practical issues that I have encountered when using private methods for decomposition are: to preserve the top-level API, all of the steps must be private, which means that they can't easily be tested; and the private methods often have little cohesion except that they form part of the same solution.

    Decomposing to Classes

    The alternative is to decompose large problems into multiple classes, effectively using a class instead of each private method. The API delegates to related classes, so the API is not polluted by the sub-steps of the problem, and the steps can be easily tested because they are each in their own highly cohesive class. Additionally, I think that this technique facilitates better adherence to the Single Responsibility Principle, since each class can be decomposed until it has precisely one responsibility. Revisiting my previous example using class composition:

        public class HousePainter {
            private UndercoatPainter undercoatPainter = new UndercoatPainter();
            private TopcoatPainter topcoatPainter = new TopcoatPainter();

            public void PaintAHouse() {
                undercoatPainter.Paint();
                topcoatPainter.Paint();
            }
        }

    Summary

    When decomposing a problem there is more than one way to represent the sub-problems.
    Using private methods keeps the logic in one place and prevents a proliferation of classes (thereby following the four rules of simple design), but decomposing to classes is more easily testable and more compatible with the Single Responsibility Principle.


  • How do I set up multiple Magento sites from the same domain?

    - by Jenx222
    I have a Magento installation with two sites, each with a shop and a view. I have an EU store in one site and a non-EU store in the other. Both sites use a different currency. At present both of these websites are located on the same domain. I have been able to switch between stores using cookies, but this seems to cause an inherent number of problems. Every time a user creates an account on the non-default shop they get a blank error message. They also get a blank error message when they log in. Can anyone point me in the right direction? I need to use a different currency for each store, but they all need to be on the same domain.
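
    A common cookie-free alternative (a sketch of standard Magento 1.x practice; the store codes and URL path below are hypothetical) is to pick the store per request path in index.php, so each store gets its own URL on the same domain:

        // index.php sketch: route /non-eu/* to the non-EU store view
        require_once 'app/Mage.php';

        $code = 'eu_store';   // default store view code
        if (strpos($_SERVER['REQUEST_URI'], '/non-eu') === 0) {
            $code = 'noneu_store';
        }
        Mage::run($code, 'store');

    Each store's Base URL (System > Configuration > Web, per store view) then needs to match the path it is served under.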


  • 504 Gateway Time-out after php fatal error

    - by tiagojsag
    I'm using nginx and php-fpm to develop a Symfony2-based website, under Ubuntu 12.10 (yes, I know I'm using a beta OS). Everything was working out fine until, due to an error in my code, I called a nonexistent function and got the following:

        Fatal error: Call to a member function (....)

    This isn't a problem in itself (it's a bug in my code, easily fixable), but after this, no other page loads. My browser just keeps trying to load the page from the webserver, until nginx times out (after +-30s, which should be some default timeout) and returns:

        504 Gateway Time-out

    Restarting php-fpm solves the issue. Nginx logs show a timeout message, and nothing appears in the php-fpm logs, even if I set them to debug level. I tried switching from fpm to fastcgi, and the same thing happens. I've looked around, but all similar errors are related to big requests/file handling, which isn't the case here. All the pages on my website load in a few seconds, even under development conditions (no caching, etc).
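
    Two pool settings that can make this easier to pin down (a diagnostic sketch, assuming the stock pool file; they are not a fix by themselves):

        ; /etc/php5/fpm/pool.d/www.conf
        ; Forward each worker's stderr into the FPM log so fatals are visible
        catch_workers_output = yes
        ; Kill a single stuck request instead of letting nginx answer 504
        request_terminate_timeout = 60s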


  • Untangle VPN setup, how to see internal addresses?

    - by NFS user
    So Untangle is set up as the default gateway at 192.168.100.1/24; it is the authoritative DHCP server, issuing addresses from 192.168.100.100 to 192.168.100.200, and it is successfully connected to the Internet. Untangle uses OpenVPN for remote access. Accessing the VPN gives me the address 192.168.40.5. However, I cannot ping any machines on the internal 192.168.100.x network remotely. Clearly, there is something basic that I am missing. What is it and how is it solved? Update: The VPN was not set up with the internal network. Since Untangle only allows editing the VPN setup once, the VPN had to be removed and reinstalled with the internal network exported. Now it works. The lesson is that the internal network must be set up before configuring the VPN.


  • OpenVPN route missing

    - by dajuric
    I can connect to an OpenVPN server from Windows without any problems. But when I try to connect from Ubuntu 12.04 (start OpenVPN), I receive the following:

        OpenVPN needs a gateway parameter for a --route option and no default was specified by either --route-gateway or --ifconfig options

    Server IP: 161.53.X.X; internal network: 10.0.0.0/8. What do I need to do?

    Client configuration:

        client
        dev tap
        proto udp
        remote 161.53.X.X 1194
        resolv-retry infinite
        nobind
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        comp-lzo
        verb 3

    Server configuration:

        local 161.53.X.X
        port 1194
        proto udp
        dev tap
        dev-node OpenVPN
        ca ca.crt
        cert server.crt
        key server.key  # This file should be kept secret
        dh dh1024.pem
        # DHCP leases addresses to clients
        server-bridge
        # Push routes to the client to allow it
        # to reach other private subnets behind
        # the server. Remember that these
        # private subnets will also need
        # to know to route the OpenVPN client
        # address pool (10.8.0.0/255.255.255.0)
        # back to the OpenVPN server.
        push "route 10.0.0.1 255.255.0.0"
        client-to-client
        duplicate-cn
        keepalive 10 120
        comp-lzo
        verb 6
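
    The error usually means a pushed route arrives without any gateway to attach to, which can happen when server-bridge is used in its parameter-less DHCP-proxy form, as above. A sketch of the explicit form (the gateway, netmask, and pool below are hypothetical and must match your actual bridge):

        # server-bridge <gateway> <netmask> <pool-start> <pool-end>
        server-bridge 10.0.0.1 255.0.0.0 10.0.0.200 10.0.0.250

    With a gateway defined server-side, pushed route directives have a default to fall back on; alternatively, the client config can supply one itself with route-gateway.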


  • Windows 8.1 Professional Hyper-V - can I create my own linux VM through it?

    - by KevinM1
    I have a Windows 8.1 Professional desktop and would like to have a Linux Mint VM on it so I can do some open-source web development work. I've tried VirtualBox, but it's giving me an I/O error during the Mint .iso installation. VMware Player flat out tells me it can't install because I have Hyper-V on my machine by default. My biggest concern is nuking the desktop, which is where I do most of my non-development work/browsing/etc. The desktop seems to be listed as a VM itself in the Hyper-V Manager, and I don't want to accidentally break it. So, two questions: Can I create the Mint VM I want/need for my work? Can I do it without messing up the desktop?
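
    For what it's worth, client Hyper-V creates guests alongside the host rather than converting it, so a new VM shouldn't touch the desktop itself. A sketch from an elevated PowerShell prompt (the names, paths, and sizes below are hypothetical):

        # Create a virtual switch bound to the physical NIC (adjust the adapter name)
        New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet"

        # Create the Mint guest with a new 40 GB disk and boot it from the ISO
        New-VM -Name "Mint" -MemoryStartupBytes 2GB -Generation 1 `
               -NewVHDPath "C:\VMs\mint.vhdx" -NewVHDSizeBytes 40GB `
               -SwitchName "ExternalSwitch"
        Set-VMDvdDrive -VMName "Mint" -Path "C:\ISOs\linuxmint.iso"
        Start-VM -Name "Mint"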


  • SPF record doesn't work (not sure which DNS server to tweak)

    - by Ion
    Problem: Google (and perhaps others) marks our emails as SPF neutral. Let me give you some background about the setup: we initially got a dedicated server (Hetzner) with Plesk installed to host a domain/web application, let's say bigjaws.com. Plesk automatically creates a DNS zone for it with some records for the various services it provides out of the box, e.g. webmail.bigjaws.com as a CNAME to bigjaws.com to provide Horde/whatever, etc. Let me point out four relevant records (where XXX.XXX.XXX.158 is our dedicated IP):

        bigjaws.com.        A       XXX.XXX.XXX.158
        mail.bigjaws.com.   A       XXX.XXX.XXX.158
        bigjaws.com.        MX (10) mail.bigjaws.com.
        bigjaws.com.        TXT     v=spf1 +a +mx -all

    The above records are not(?) valid anymore though, because after using this dedicated server for a while, our site got bigger and bigger, so we decided to move our operations over to AWS (EC2, RDS, ELB, etc.), but we retained the mail functionality as is, i.e. emails from [email protected] are sent by connecting to our dedicated server, where Plesk takes care of things. This was decided in order not to set up anything from scratch. Of course, for all DNS-related things we now use Route53. In Route53 I have the following records:

        mail.bigjaws.com.   A       XXX.XXX.XXX.158
        bigjaws.com.        MX (10) mail.bigjaws.com
        bigjaws.com.        SPF     "v=spf1 +ip4:XXX.XXX.XXX.158 +mx ~all"

    From my understanding of SPF, the SPF status should have been pass: I designate that all email sent by bigjaws.com from XXX.XXX.XXX.158 is valid/not spam (I added +mx there, but I'm not sure if it's needed). When a mail server receives an email, doesn't it look up the SPF record of the domain and check it against the IP it got the email from? Checking with spfquery:

        root@box:~# spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected]
        StartError
        Context: Failed to query MAIL-FROM
        ErrorCode: (2) Could not find a valid SPF record
        Error: No DNS data for 'bigjaws.com'.
        EndError
        noneneutral
        Please see http://www.openspf.org/Why?id=employee1%40bigjaws.com&ip=XXX.XXX.XXX.158&receiver=spfquery : Reason: default
        spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com
        Received-SPF: neutral (spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com) client-ip=XXX.XXX.XXX.158; [email protected];

    If I go to the address listed above (openspf.org), it tells me that the message should have been accepted(!):

        spfquery rejected a message that claimed an envelope sender address of [email protected]. spfquery received a message from static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) that claimed an envelope sender address of [email protected]. The domain bigjaws.com has authorized static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) to send mail on its behalf, so the message should have been accepted. It is impossible for us to say why it was rejected. What should I do? If the problem persists, contact the bigjaws.com postmaster.

    Also, here are some headers from an email sent by one of our [email protected] addresses to a gmail.com address (by the way, bigjaws.de, listed in the "Received: from" field, was the initial domain hosted on the dedicated server before adding the .com one; both are still listed as separate subscriptions under Plesk).
Delivered-To: [email protected] Received: by 10.14.177.70 with SMTP id c46csp289656eem; Wed, 23 Oct 2013 01:11:00 -0700 (PDT) X-Received: by 10.14.102.66 with SMTP id c42mr306186eeg.47.1382515860386; Wed, 23 Oct 2013 01:11:00 -0700 (PDT) Return-Path: <[email protected]> Received: from bigjaws.de (static.158.XXX.XXX.XXX.clients.your-server.de. [XXX.XXX.XXX.158]) by mx.google.com with ESMTPS id l4si19438578eew.161.2013.10.23.01.10.59 for <[email protected]> (version=TLSv1 cipher=RC4-SHA bits=128/128); Wed, 23 Oct 2013 01:10:59 -0700 (PDT) Received-SPF: neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=XXX.XXX.XXX.158; Authentication-Results: mx.google.com; spf=neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected] DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=default; d=bigjaws.com; b=WwRAS0WKjp9lO17iMluYPXOHzqRcOueiQT4rPdvy3WFf0QzoXiy6rLfxU/Ra53jL1vlPbwlLNa5gjoJBi7ZwKfUcvs3s02hJI7b3ozl0fEgJtTPKoCfnwl4bLPbtXNFu; h=Received:Received:Message-ID:Date:From:User-Agent:MIME-Version:To:Subject:Content-Type:Content-Transfer-Encoding; Received: (qmail 22722 invoked from network); 23 Oct 2013 10:10:59 +0200 Received: from hostname.static.ISP.com (HELO ?192.168.1.60?) (YYY.YYY.ISP.IP) by static.158.XXX.XXX.XXX.clients.your-server.de. with ESMTPSA (DHE-RSA-AES256-SHA encrypted, authenticated); 23 Oct 2013 10:10:59 +0200 Message-ID: <[email protected]> Date: Wed, 23 Oct 2013 11:11:00 +0300 From: BigJaws Employee <[email protected]> User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1 MIME-Version: 1.0 To: [email protected] Subject: test SPF Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit test SPF Any ideas why SPF is not working correctly? Also, are there any DNS settings that are not needed anymore and create a problem?


  • Varnish VCL not allowing two separate IP addresses as backends

    - by Peter Griffin
    Every time I attempt to add an extra backend to our VCL file, it fails. Here are the DAEMON_OPTS we are running off:

        DAEMON_OPTS="-a :80 \
                     -T localhost:6082 \
                     -f /etc/varnish/custom.vcl \
                     -u varnish -g varnish \
                     -S /etc/varnish/secret \
                     -s malloc,10G"

    And here are the offending backends:

        backend default {
            .host = "114.123.456.789";
            .port = "8080";
        }

        backend alt {
            .host = "203.123.456.789";
            .port = "80";
        }

    Any ideas? Gut feeling is it might need the backends to be set somewhere, but I'm not sure where.
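
    That gut feeling matches how Varnish behaves: the VCL compiler rejects backends that are declared but never referenced, so the second backend also has to be selected somewhere in the VCL state machine. A sketch in Varnish 3 syntax (the host pattern is hypothetical):

        sub vcl_recv {
            # Send requests for the alternate site to the second backend
            if (req.http.host ~ "(?i)alt\.example\.com$") {
                set req.backend = alt;
            } else {
                set req.backend = default;
            }
        }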


  • Using HTML5 Today part 4&ndash;What happened to XHTML?

    - by Steve Albers
    This is the fourth entry in a series of descriptions & demos from the "Using HTML5 Today" user group presentation. For practical purposes, the original XHTML standard is a historical footnote, although XHTML Transitional will probably live on forever in the default web page templates of old web page editors. The original XHTML spec was released in 2000, on the heels of the HTML 4.01 spec. The plan was to move web development away from HTML to the more formal, rigorous approach that XHTML offered, but it was built on a principle that conflicts with the history and culture of the Internet: XHTML introduced the idea of Draconian Error Handling, which essentially means that invalid XML markup on a page will cause the page to stop rendering. There is a transitional mode offered in the original XHTML spec, but the goal was to move to D.E.H. You can see the result by changing the MIME type for a document to "application/xhtml+xml" - for my class example we change this setting in the web.config file:

        <staticContent>
          <remove fileExtension=".html" />
          <mimeMap fileExtension=".html" mimeType="application/xhtml+xml" />
        </staticContent>

    With the new strict syntax, a simple error, in this case a duplicate </td> tag, can cause a critical page error. While XHTML became very popular in the ensuing decade, the Strict form of XHTML never achieved widespread use. Draconian Error Handling was one of the factors that led in time to the creation of the WHATWG, or Web Hypertext Application Technology Working Group. The WHATWG contributed to the eventual disbanding of the XHTML 2.0 working group and the W3C's move to embrace the HTML5 standard. For developers who long for XML markup, the W3C HTML5 standard includes an XHTML5 syntax. For a longer, more definitive look at what happened to XHTML and how HTML5 came to be, check out the Dive Into HTML mirror site or Bruce Lawson's "HTML5: Who, What, When, Why" talk.
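
    A minimal illustration of the failure mode (my example, not from the original presentation): served as text/html, browsers silently recover from the stray closing tag below; served as application/xhtml+xml, the parser halts and the user sees an XML error page instead of the content.

        <!-- The second </td> is a well-formedness error under XML parsing -->
        <table>
          <tr><td>cell</td></td></tr>
        </table>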


  • deploying AV via GPO only to workstations

    - by jeremy
    We have a small (100 machines) Windows domain running Server 2008 R2. We use Symantec Endpoint Protection 12.1. I want to have GPO deploy the AV software to client machines automatically, but only to client workstations, not to servers, which run different software. I've set it up before using a GPO linked to the domain mycompany.local and it works, but it deploys the AV software to ALL machines on the domain, including my servers. I can create an OU in Active Directory for servers, and perhaps create one for client machines too, but I'd rather not have to go and move new domain members from the default location under Computers into a different folder. How can I use GPO to deploy this AV software only to workstations on our network, and not to servers?
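
    One approach that avoids moving computer objects at all (a suggestion, not from the original post) is to keep the GPO linked at the domain level and attach a WMI filter that matches only workstation operating systems. In WQL, Win32_OperatingSystem reports ProductType 1 for workstations, 2 for domain controllers, and 3 for member servers:

        SELECT * FROM Win32_OperatingSystem WHERE ProductType = "1"

    Machines where the filter evaluates to false simply skip the GPO, so new workstations pick up the deployment wherever they land in the directory.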


  • What is the best policy for user account on a Windows 7 Media Center shared by the whole family

    - by Matt Spradley
    What is a good way to manage user accounts for a Windows 7 Media Center PC that is part of an entertainment center for a family? Each family member keeps most of their personal stuff on their own computer. I was thinking the simple approach would be to create an admin account for management and then just create a "Family" user account w/o a password that is the default account used by the media center. This account would be used for the PVR, playing Blu-rays, music, etc. I don't think it is practical for someone to have to log in every time they use the media center.


  • .htaccess does not ask the password

    - by Sarp Kaya
    I am using Ubuntu 12.04 and trying to use .htaccess on a page served by Apache 2. My .htaccess file looks like this:

        AuthType Basic
        AuthName "Password Required"
        AuthBasicProvider file
        AuthUserFile /home/janeb/.htpasswd
        require valid-user

    The /home/janeb/.htpasswd file is:

        inb351:$apr1$Ya4tzYvr$KCs3eyK1O2/c7Q9zRcwn..

    And the /etc/apache2/sites-available/default file is:

        UserDir public_html
        <Directory ~ "public_html/.*">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>

    I restarted Apache. I have tried changing require valid-user to require user inb351. Still no luck. I also tried AllowOverride with AuthConfig and with AuthConfig Indexes. So I don't know what else to do, and yes, I restarted Apache after every step.
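
    One detail worth checking (my reading of the Apache docs, not part of the original question): AllowOverride is only honored in <Directory> sections whose path is given without a regular expression, so the ~ "public_html/.*" form above may be silently ignoring the override. A sketch using a literal wildcard path instead:

        <Directory /home/*/public_html>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride AuthConfig
            Order allow,deny
            Allow from all
        </Directory>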


  • Better solution for boolean mixing?

    - by Ruben Nunez
    Sorry if this question has been asked in the past, but searching Google and here didn't yield relevant results, so here goes. I'm working on a fragment shader that implements both conditional/boolean diffuse and bump mapping (that is to say, you don't need a diffuse texture or a normals texture, and if they're not present, they're simply changed to default values). My current solution is to use a uniform float to say "mix amount". For example, computing the diffuse texel works as:

        // Compute diffuse amount scaled by vCol
        // If no texture is present (mDif = 0.0), then DiffuseTexel = vCol
        // kT[0] is the diffuse texture
        // vTex is the texture co-ordinates
        // mDif is the uniform float containing the mix amount (either 0.0 or 1.0)
        vec4 DiffuseTexel = vCol*mix(vec4(1.0), texture2D(kT[0], vTex), mDif);

    While that works great and all, I was wondering if there's a better way of doing this, as I will never have any use for in-between values for funky effects. I know that perhaps the best solution is to simply write separate shaders for mDif=0.0 and mDif=1.0, but I'd like a more elegant solution than splicing shaders before compiling or writing multiple shader files and keeping each one updated. Any ideas are greatly appreciated. =)
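
    One middle ground (a sketch, not from the original question) is to keep a single shader source and compile variants with preprocessor defines, prepending a #define line at load time instead of maintaining separate files:

        // Prepend "#define HAS_DIFFUSE_MAP" to the source when a diffuse texture is bound
        #ifdef HAS_DIFFUSE_MAP
            vec4 DiffuseTexel = vCol * texture2D(kT[0], vTex);
        #else
            vec4 DiffuseTexel = vCol;
        #endif

    The dead branch is removed at compile time, so each variant pays no per-fragment cost for the choice.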


  • Ubuntu: move logs from /dev/tty8 to different terminal /dev/tty12 or get rid of it.

    - by Casual Coder
    I want to know how to move or get rid of the /dev/tty8 log output in Ubuntu 9.10. /dev/tty7 is my regular X session. When I switch user to a test account where I can try and test setups and configs, I am at the next available console, i.e. /dev/tty9, because /dev/tty8 is taken by log output. Where can I configure this? All I've found related to /dev/tty8 is commented lines in /etc/rsyslog.d/50-default.conf. I changed them like this:

        daemon,mail.*;\
            news.=crit;news.=err;news.=notice;\
            *.=debug;*.=info;\
            *.=notice;*.=warn       /dev/tty12

    And I've got nice log output on /dev/tty12, but where is the configuration for the log output on /dev/tty8? How can I change it?


  • Can't drag files from Explorer into Notepad++ running as administrator on Windows 8

    - by Luke Foreman
    If I have Notepad++ running as administrator, I can't drag files from Explorer onto it (they are rejected with the 'stop' cursor), and if I try to use the Explorer extension's right-click 'Edit with Notepad++' it throws an error. Opening the files using the Notepad++ 'Open' dialog, or even double-clicking them in Explorer, works as it should. (Note: double-clicking is not a solution, as very few of the files I want to open are associated with Notepad++ by default.) I have UAC set to 'never notify'. Using the hack where UAC 'admin approval mode' is disabled fixes the issue, but kills the ability to use Metro apps.


  • Installing maven on Ubuntu by manual download

    - by WebDevHobo
    To install Maven, I downloaded the latest version from the website and then followed these steps: http://maven.apache.org/download.html#Installation

    The last step, the version check, does not work. It says that 'mvn' is currently not installed and that I should type:

        sudo apt-get install maven2

    If I run the mvn file directly by its full path, it does work:

        root@ubuntu:~# /usr/local/apache-maven/apache-maven-2.2.1/bin/mvn --version
        Apache Maven 2.2.1 (r801777; 2009-08-06 12:16:01-0700)
        Java version: 1.6.0_21
        Java home: /usr/java/jdk1.6.0_21/jre
        Default locale: en_US, platform encoding: UTF-8
        OS name: "linux" version: "2.6.32-25-generic" arch: "i386" Family: "unix"

    So, what am I doing wrong here? Or what would apt-get install do extra that I might have forgotten?
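
    Since mvn runs fine by its full path, the usual culprit is that the PATH export from the install steps only applied to the shell that ran it. A sketch of making it permanent (the M2_HOME value matches the path above; adjust if yours differs):

        # Append to ~/.bashrc, then open a new shell or run: source ~/.bashrc
        export M2_HOME=/usr/local/apache-maven/apache-maven-2.2.1
        export PATH="$M2_HOME/bin:$PATH"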


  • how to set auto redirection in tomcat

    - by Registered User
    I have a site, http://social.openitup.in. Right now what you are seeing is a default Tomcat 6 page. I am using mod_ajp as a front end, and the Apache vhost configuration for it is:

        <VirtualHost *:80>
            ServerName social.openitup.in
            ServerAdmin webmaster@localhost
            ProxyRequests off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPreserveHost On
            ProxyPass / ajp://192.168.1.19:8009/
            ProxyPassReverse / ajp://192.168.1.19:8009/
        </VirtualHost>

    However, I have an application running on it, http://social.openitup.in/olat. What I want to do is, when someone opens http://social.openitup.in, then rather than seeing the Tomcat 6 home page from /var/lib/tomcat6/webapps/ROOT/index.html, the person is redirected to the OLAT application, which is in /var/lib/tomcat6/webapps/olat. How can this be achieved? The above vhost configuration is on a machine separate from where OLAT is running.
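
    One way to do this from the Apache side (a sketch, assuming mod_alias is available; it rewrites only the bare root URL) is to add a redirect to the vhost before the ProxyPass lines:

        # Send requests for the site root to the OLAT context
        RedirectMatch ^/$ /olat/

    An alternative on the Tomcat side is to replace ROOT/index.html with a page that issues the redirect itself (a meta refresh, or a JSP calling response.sendRedirect("/olat/")), which keeps the behavior with the application server.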

