Search Results

Search found 27016 results on 1081 pages for 'entry point'.


  • ArchBeat Link-o-Rama Top 10 for September 2-8, 2012

    - by Bob Rhubart
    The Top 10 items shared on the OTN Facebook Page for the week of September 2-8, 2012.
    1. Adding a runtime LOV for a taskflow parameter in WebCenter | Yannick Ongena. Oracle ACE Yannick Ongena illustrates how to customize the parameters tab for a taskflow in WebCenter.
    2. Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET | Scott Nelson. "It has been a very winding path and this blog entry is intended to share both the lessons learned and relevant approaches that led to those learnings," says Scott Nelson. "Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there."
    3. Free Event: Oracle Technology Network Architect Day – Boston, MA – 9/12/2012. Sure, you could ask a voodoo priestess for help in improving your solution architecture skills. But there's the whole snake thing, and the zombie thing, and other complications. So why not keep it simple and register for Oracle Technology Network Architect Day in Boston, MA. There's no magic, just a full day of technical sessions covering Cloud, SOA, Engineered Systems, and more. Registration is free, but seating is limited. You'll curse yourself if you miss this one.
    4. Starting and Stopping Fusion Applications the Right Way | Ronaldo Viscuso. While the fastartstop tool that ships with Oracle Fusion Applications does most of the work to start/stop/bounce the Fusion Apps environment, it does not do it all. Oracle Fusion Applications A-Team blogger Ronaldo Viscuso's post "aims to explain all tasks involved in starting and stopping a Fusion Apps environment completely."
    5. Article Index: Architect Community Column in Oracle Magazine. Did you know that Oracle Magazine features a regular column devoted specifically to the architect community? Every issue includes insight and expertise from architects who regularly work with Oracle technologies. Click here to see a complete list of these articles.
    6. Using FMAP and AnalyticsRes in an Oracle BI High Availability Implementation | Art of Business Intelligence. "The fmap syntax has been used for a long time in Oracle BI / Siebel Analytics when referencing images inherent in the application as well as custom images," says Oracle ACE Christian Screen. "This syntax is used on Analysis requests and dashboards."
    7. Dodeca Customer Feedback - The Rosewood Company | Tim Tow. Oracle ACE Director Tim Tow shares anecdotal comments from one of his clients, a company that is deploying Dodeca to replace an aging VBA/Essbase application.
    8. Configuring UCM cache to check for external Content Server changes | Martin Deh. Oracle WebCenter and ADF A-Team blogger shares the background information and the solution to a recently encountered customer scenario.
    9. Attend OTN Architect Day in Los Angeles – by Architects, for Architects – October 25. The OTN Architect Day roadshow stops in Boston next week, then it's on to Los Angeles for another all architecture, all day event on Thursday October 25, 2012 at the Sofitel Los Angeles, 555 Beverly Boulevard, Los Angeles, CA 90048. Like all Architect Day events, this one is absolutely free, so register now.
    10. The Role of Oracle VM Server for SPARC In a Virtualization Strategy. New OTN article from Matthias Pfutzner.
    Thought for the Day: "Practicing architects, through education, experience and examples, accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem…" — Eberhardt Rechtin (January 16, 1926 – April 14, 2006) Source: SoftwareQuotes.com

    Read the article

  • How to copy partition from one disc to another (boot partition keeping all the vital data)?

    - by Patryk
    I have bought a new laptop, but its HDD, which runs at 5400 rpm, is not fast enough for me. The laptop runs Windows 7 64-bit. I have my 'old' drive (a better one, a Seagate Momentus 7200 rpm) and I would like to swap it in, but without reinstalling everything. Hence my question: can I copy the boot partition from my laptop's hard drive to the old drive so that it will still boot properly? If so, how do I do it? Would Norton Ghost be useful here? My intent is to replace just this partition and leave the rest alone.

    Read the article

  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a TTL of 120 seconds):

    ;; ANSWER SECTION:
    orion.2x.to. 116 IN A 80.237.201.41
    orion.2x.to. 116 IN A 87.230.54.12
    orion.2x.to. 116 IN A 87.230.100.10
    orion.2x.to. 116 IN A 87.230.51.65

    I learned that not every ISP / device treats such a response the same way. For example, some DNS servers rotate the addresses randomly or always cycle through them. Some just propagate the first entry, others try to determine which is best (regionally near) by looking at the IP address. However, if the userbase is big enough (spread over multiple ISPs etc.) it balances pretty well. The discrepancy between the highest and lowest loaded server hardly ever exceeds 15%. However, now I have the problem that I am introducing more servers into the system, and they do not all have the same capacity. I currently only have 1 Gbps servers, but I want to work with 100 Mbit and also 10 Gbps servers. So what I want is to introduce a server with 10 Gbps with a weight of 100, a 1 Gbps server with a weight of 10 and a 100 Mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nicely; the bandwidth almost doubled), but adding a 10 Gbit server 100 times to DNS is a bit ridiculous. So I thought about using the TTL. If I give server A a 240-second TTL and server B only 120 seconds (which is about the minimum to use for round robin, as a lot of DNS servers fall back to 120 if a lower TTL is specified, or so I have heard), I think something like this should occur in an ideal scenario:

    First 120 seconds:
    - 50% of requests get server A -> keep it for 240 seconds
    - 50% of requests get server B -> keep it for 120 seconds

    Second 120 seconds:
    - 50% of requests still have server A cached -> keep it for another 120 seconds
    - 25% of requests get server A -> keep it for 240 seconds
    - 25% of requests get server B -> keep it for 120 seconds

    Third 120 seconds:
    - 25% will get server A (from the 50% of server A that now expired) -> cache 240 sec
    - 25% will get server B (from the 50% of server A that now expired) -> cache 120 sec
    - 25% will have server A cached for another 120 seconds
    - 12.5% will get server B (from the 25% of server B that now expired) -> cache 120 sec
    - 12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec

    Fourth 120 seconds:
    - 25% will have server A cached -> cache for another 120 sec
    - 12.5% will get server A (from the 25% of B that now expired) -> cache 240 sec
    - 12.5% will get server B (from the 25% of B that now expired) -> cache 120 sec
    - 12.5% will get server A (from the 25% of A that now expired) -> cache 240 sec
    - 12.5% will get server B (from the 25% of A that now expired) -> cache 120 sec
    - 6.25% will get server A (from the 12.5% of B that now expired) -> cache 240 sec
    - 6.25% will get server B (from the 12.5% of B that now expired) -> cache 120 sec
    - 12.5% will have server A cached -> cache another 120 sec
    - ... I think I lost track at this point, but you get the idea ...

    As you can see, this gets pretty complicated to predict and it will certainly not work out exactly like this in practice, but it should definitely have an effect on the distribution. I know that weighted round robin exists and is controlled by the authoritative server: it cycles through DNS records when responding and returns records with a probability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise. If it doesn't weight perfectly, that's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution, and it doesn't require a DNS server that controls this dynamically, which saves resources; in my opinion, that is the whole point of DNS load balancing vs. hardware load balancers. My question now is: are there any best practices / methods / rules of thumb for weighting round robin distribution using the TTL attribute of DNS records? Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what a single server with Ethernet can handle, so I need a balancing solution that distributes the bandwidth across several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc., but the costs are ridiculous, and it also only increases the width of the bottleneck rather than eliminating it. The only thing I can think of is anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.
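    A quick back-of-the-envelope simulation (not from the question; the numbers and names below are mine) suggests what the TTL trick converges to: if resolvers pick A or B with equal probability on every refresh but hold A for 240 seconds and B for 120, then in steady state each resolver spends time on A and B roughly in proportion to the TTLs, i.e. about a 2:1 split rather than the 100:1 or 10:1 weights wanted above. A minimal Python sketch, assuming an idealized resolver that re-queries immediately on expiry:

    import random

    TTLS = {"A": 240, "B": 120}   # hypothetical per-server TTLs
    SIM_SECONDS = 24 * 3600       # simulate one day per resolver
    RESOLVERS = 10000

    time_on = {"A": 0, "B": 0}
    for _ in range(RESOLVERS):
        t = 0
        while t < SIM_SECONDS:
            server = random.choice(list(TTLS))        # plain 50/50 round-robin answer
            held = min(TTLS[server], SIM_SECONDS - t)
            time_on[server] += held                   # the resolver sticks with it until expiry
            t += held

    total = sum(time_on.values())
    for server, secs in sorted(time_on.items()):
        print("server %s: %.1f%% of resolver-time" % (server, 100.0 * secs / total))

    # Expected output: roughly 66.7% on A and 33.3% on B. The achievable weight ratio
    # is bounded by the TTL ratio, so very skewed weights would need something else.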

    Read the article

  • Client side latency when using prediction

    - by Tips48
    I've implemented client-side prediction in my game: when input is received by the client, it first sends it to the server and then acts upon it just as the server will, to reduce the appearance of lag. The problem is that the server is authoritative, so when the server sends back the position of the entity to the client, it undoes the effect of the prediction and creates a rubber-banding effect. For example: client sends input to server - client reacts to the input - server receives and reacts to the input - server sends back its response - the client's reaction is undone due to the latency between server and client. To solve this, I've decided to store the game state and input every tick on the client, and then when I receive a packet from the server, get the game state from when the packet was sent and simulate the game up to the current point. My questions: Won't this cause lag? If I'm receiving 20-30 EntityPositionPackets a second, that means I have to run 20-30 re-simulations of the game state. How do I sync the client and server tick? Currently, I'm sending the millisecond the packet was sent by the server, but I think it's adding too much complexity instead of just sending the tick. The problem with converting it to sending the tick is that I have no guarantee that the client and server are ticking at the same rate, for example if the client is an older, low-end PC.
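    For what it's worth, a common way to structure the replay is to keep only the inputs the server has not yet acknowledged, snap to the authoritative state when it arrives, and re-apply just those pending inputs; the re-simulation then only covers the last round-trip's worth of ticks, not the whole game. A minimal Python sketch (the class and method names are illustrative, not from the post), assuming client and server share the same simulate(state, input) step:

    from collections import deque

    class PredictedEntity:
        def __init__(self, state, simulate):
            self.state = state         # best known state (server baseline + local prediction)
            self.simulate = simulate   # simulate(state, input) -> new state; same code the server runs
            self.pending = deque()     # (tick, input) pairs the server has not acknowledged yet

        def apply_local_input(self, tick, player_input):
            # Predict immediately so the player sees no delay, and remember the input for replay.
            self.pending.append((tick, player_input))
            self.state = self.simulate(self.state, player_input)

        def on_server_state(self, server_tick, server_state):
            # Discard everything the server has already processed...
            while self.pending and self.pending[0][0] <= server_tick:
                self.pending.popleft()
            # ...then rewind to the authoritative state and replay the unacknowledged inputs,
            # so the correction is applied "in the past" instead of rubber-banding the present.
            self.state = server_state
            for _, player_input in self.pending:
                self.state = self.simulate(self.state, player_input)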

    Read the article

  • Decoupling software components via naming convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring our driver management. In my multi-tier architecture I have:

    Base class
    DAL.Device // my entity

    Interfaces
    BL.IDriver // handles the data processing between application and device
    BL.IDriverCreator // creates an IDriver from a Device
    BL.IDriverFactory // handles the driver creation requests

    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver requires a) changing code within the DriverFactory and b) referencing the new IDriver implementation / assembly. From a customer's point of view, that means every new driver, used or not, needs a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like naming convention (see Caliburn.Micro: Xaml Made Easy):

    BL.RestDriver
    BL.RestDriverCreator
    DAL.RestDevice

    After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do a name splitting / comparison (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also comes down to comparing strings). My question: is that a good approach across layer borders? If not, what would be a good approach?
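    For illustration, the convention-based lookup itself is only a few lines. The sketch below is in Python rather than .NET (the "drivers" package name and suffix rules are assumptions made for the example), but the same mechanics map onto assembly scanning plus type-name matching inside the factory:

    import importlib
    import pkgutil

    import drivers  # hypothetical plug-in package; each module defines one *DriverCreator class

    def find_creator(device):
        # "RestDevice" -> "Rest" -> "RestDriverCreator"
        prefix = type(device).__name__.replace("Device", "")
        wanted = prefix + "DriverCreator"
        for mod_info in pkgutil.iter_modules(drivers.__path__):
            module = importlib.import_module("drivers." + mod_info.name)
            creator_cls = getattr(module, wanted, None)
            if creator_cls is not None:
                return creator_cls()   # the factory never references a concrete driver type
        raise LookupError("no creator found for " + type(device).__name__)

    # driver = find_creator(rest_device).create(rest_device)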

    Read the article

  • SBS 2008 DNS issues in BPA

    - by evesirim
    I'm getting constant Critical Issue events in the Small Business Server Best Practices Analyser and the resulting SBS Console reports, which begin "The DNS client is not configured to point only to the internal IP address of the server.". When I check the DNS Manager, there are two separate IPs listed, one ***.***.***.2 and the other ***.***.***.28. After an ipconfig /all and some checking online, I found that the second IP is created by DHCP for RAS & VPN purposes. It seems to cause no conflicts or other detrimental effects apart from constantly sending me error messages and alert reports. Does anyone know of a way that I can change settings somewhere so that Windows accepts this second IP, or at least stops alerting me of its presence? Perhaps a registry hack of some kind...? Many thanks in advance

    Read the article

  • Reset / Remove - Google Keywords

    - by Herr Kaleun
    Summary: My site is ranking for filthy keywords and I would like to remove them from Google's rankings/keywords. Background: My server was hacked using the timthumb exploit/security vulnerability; apparently I was the last person on earth to read the news about the exploit, several months after it appeared. Anyway, the "hacker" was so friendly as to modify the index.php file in such a fashion that it generated random sexually oriented keywords if the website was fetched as Googlebot. So if you fetched it as Googlebot / it got indexed, you would get randomly generated keywords like: sex videos teenager teen sex adult sex preteen A LINK TO A RANDOM CONTENT OF MY WEBPAGE anime sex videos - a rough list, something similar to that, about 180-200 per page. I discovered it far too late, so Google had me indexed for the word "sex" and certain adult-oriented keywords, roughly 2000 of them. I removed all the content, took the site down, replaced the index.php with a static HTML page and added an "ERROR 410" title to the website so that the content is marked as no longer here and removed permanently. I also applied for a manual review of my website, about 1.5 months ago, but the keywords are still there, and, very strangely, some of the keyword rankings actually "improve" over time. Here are some screenshots from Webmaster Tools: Question: How can I remove these filthy keywords and re-rank my website as a "normal" website as quickly as possible? I want to "REMOVE" the keywords if possible. Please help me or point me in a direction. Thank you

    Read the article

  • Mount TMPFS instead of ro /dev

    - by schiggn
    I am working on an ARM-based embedded system with a custom Debian Linux based on kernel 2.6.31. In the final system, the root file system is stored as squashfs on flash. Now, the folder /dev is created by udev, but since no hot-plugging functionality is needed and booting time is critical, I wanted to delete udev and "hard code" the /dev folder (read here, page 5). Because I still need to change parameters of the devices (with ioctl / sysfs), that approach does not work for me in this case. So I thought of mounting a tmpfs on /dev and changing the parameters there. Is this possible, and what is the best way to do it? My approach would be:
    - delete /dev from the RFS
    - create a tar containing the basic devices
    - mount a tmpfs on /dev
    - untar the tar file into /dev
    - change the parameters
    Could this work? Do you see any problems? I found out that you can mount on top of an already mounted mount point; is it somehow possible to take the existing data with you while mounting the new file system? If so, that would be very convenient! Thanks

    Update: I just tried that out, but I'm stuck at a certain point. I packed all my devices into devices.tar, packed that into /usr of my squashfs and added the following lines to mountkernfs.sh, which is executed right after INIT:

    #mount /dev on tmpfs
    echo -n "Mounting /dev on tmpfs..."
    mount -o size=5M,mode=0755 -t tmpfs tmpfs /dev
    mknod -m 600 /dev/console c 5 1
    mknod -m 600 /dev/null c 1 3
    echo "done."
    echo -n "Populating /dev..."
    tar -xf /usr/devices.tar -C /dev
    echo "done."

    This works fine on the version running over NFS; if I place printf's in the code, I can see it executing, and if I comment out the extracting part, it complains about missing devices.

    Booting OK:

    mmc0: new high speed SDHC card at address 0007
    mmcblk0: mmc0:0007 SD04G 3.67 GiB
    mmcblk0: p1
    IP-Config: Unable to set interface netmask (-22).
    Looking up port of RPC 100003/2 on 192.168.1.234
    Looking up port of RPC 100005/1 on 192.168.1.234
    VFS: Mounted root (nfs filesystem) on device 0:14.
    Freeing init memory: 136K
    INIT: version 2.86 booting
    Mounting /dev on tmpfs...done.
    Populating /dev...done.
    Initializing /var...done.
    Setting the system clock.
    System Clock set to: Thu Sep 13 11:26:23 UTC 2012.
    INIT: Entering runlevel: 2
    UBI: attaching mtd8 to ubi0

    Commenting out the extraction of the tar:

    mmc0: new high speed SDHC card at address 0007
    mmcblk0: mmc0:0007 SD04G 3.67 GiB
    mmcblk0: p1
    IP-Config: Unable to set interface netmask (-22).
    Looking up port of RPC 100003/2 on 192.168.1.234
    Looking up port of RPC 100005/1 on 192.168.1.234
    VFS: Mounted root (nfs filesystem) on device 0:14.
    Freeing init memory: 136K
    INIT: version 2.86 booting
    Mounting /dev on tmpfs...done.
    Populating /dev...done.
    Initializing /var...done.
    Setting the system clock.
    Cannot access the Hardware Clock via any known method.
    Use the --debug option to see the details of our search for an access method.
    Unable to set System Clock to: Thu Sep 13 12:24:00 UTC 2012 ... (warning).
    INIT: Entering runlevel: 2
    libubi: error!: cannot open "/dev/ubi_ctrl"

    So far so good. But if I pack the whole story into a squashfs and boot from there, it acts strangely. While booting it tells me that it is unable to open an initial console, and it throws errors when mounting the UBIFS devices, but it finally provides a login anyway. On top of that, my echo's are not executed. If I then log in, /dev is mounted as tmpfs as desired and all the devices are inside. When I redo the "mount" command to mount the UBIFS partitions, it executes without problems and the partitions are usable.

    From squashfs:

    VFS: Mounted root (squashfs filesystem) readonly on device 31:15.
    Freeing init memory: 136K
    Warning: unable to open an initial console.
    mmc0: new high speed SDHC card at address 0007
    mmcblk0: mmc0:0007 SD04G 3.67 GiB
    mmcblk0: p1
    UBIFS error (pid 484): ubifs_get_sb: cannot open "ubi1_0", error -19

    Additionally, part of the rest of the boot scripts is still executed, but not all of them. Does anyone have a clue why? Another question: is 5 MB enough (or too much) for /dev?

    Read the article

  • Steam on 64-bit 14.04: need some help, missing a few 32-bit libs

    - by YellowShark
    Steam says I'm missing the following libs; I'm hoping someone can help me get things in better shape:

    xyz@abc:~$ STEAM_RUNTIME=0 steam
    Running Steam on ubuntu 14.04 64-bit
    STEAM_RUNTIME is disabled by the user
    Error: You are missing the following 32-bit libraries, and Steam may not run:
    libpangoft2-1.0.so.0
    libpango-1.0.so.0
    libgtk-x11-2.0.so.0
    libgdk_pixbuf-2.0.so.0
    Installing breakpad exception handler for appid(steam)/version(1401381906_client)
    Installing breakpad exception handler for appid(steam)/version(1401381906_client)
    [2014-06-11 20:45:39] Startup - updater built May 29 2014 09:19:23
    [2014-06-11 20:45:39] Verifying installation...
    [2014-06-11 20:45:39] Verification complete
    [2014-06-11 20:45:42] Shutdown

    I tried installing the following i386 packages: libpango-1.0-0:i386, libpangoft2-1.0-0:i386, and libgdk-pixbuf2.0-0:i386, and symlinking the .so files (from usr/lib/i386whatever../) into the ~/.local/share/Steam/ubuntu12_32/ folder, but I wasn't able to find the right match for the gtk-x11 lib, and ultimately wound up with a different, but still non-working, situation. So I've backtracked to this point and have removed those i386 packages for now. It's worth noting that Steam runs if I don't use STEAM_RUNTIME=0. Also, Steam seemed to "recognize" the i386 versions of the libpango and libpangoft2 libs after I symlinked them into place during the course of my troubleshooting; when I would rerun STEAM_RUNTIME=0 steam, it wouldn't list those two items as missing anymore. Instead, though, I had a bunch of gtk-related issues: something about overlay-scrollbar not being available, as well as warnings that it can't find the murrine engine... a whole bunch of stuff that sounded like I'd gone too far down the wrong path. Anyhow, any help sorting this out would be appreciated, and thanks!

    Read the article

  • Setting up routing for MS DirectAccess to a VMWare EsXi Host

    - by Paul D'Ambra
    I'm trying to set up DirectAccess on a virtual machine so I can demonstrate its value and then, if need be, add a physical machine to host it. I'm hitting a problem because the DirectAccess machine (DA01) needs to have 2 public addresses actually configured on the external adapter, but there is a Zyxel Zywall USG300 between the VMware ESXi host and the outside world. I've summarised my setup in this diagram. If I ping 212.x.y.89 from the LAN I get a response, but if I ping from the VM I get "destination host unreachable". I used "route add 212.x.y.89 192.c.d.1" and get "request timed out". At that point I see outbound traffic allowed on the Zyxel firewall but nothing coming back. I'm past my understanding of routing and VMware, so I'm not sure how to tie down where my problem lies (or even if this setup is possible). So any help massively appreciated. Paul

    Read the article

  • Multiple URLs on one domain in IIS 6 [closed]

    - by TGnat
    Possible Duplicate: How to Host Multiple Domains / Web Sites on one IIS6 Server. I'm a developer and only know the basics of IIS administration. I want to give each of my clients a personalized URL to access my web site. I have mySite.com and I would like to allow client A to access the site via A.mySite.com and client B to use B.mySite.com. These would all point to the mySite.com domain with the same IP address. Can this be done in IIS? Do I have to register new domains for each client? Are there other ways to accomplish the same thing?

    Read the article

  • How Can I Start an Incognito/Private Browsing Window from a Shortcut?

    - by Jason Fitzpatrick
    Sometimes you just want to pop the browser open for a quick web search without reloading all your saved tabs; read on as we show a fellow reader how to make a quick private-browsing shortcut. Dear How-To Geek, I came up with a solution to my problem, but I need your help implementing it. I typically have a ton of tabs open in my web browser and, when I need to free up system resources while gaming or using a resource-intensive application, I shut down the web browser. The problem arises when I find myself needing to do a quick web search while the browser is shut down. I don't want to open it up, load all the tabs, and waste the resources in doing so, all for a quick Google search. The perfect solution, it would seem, is to open up one of Chrome's Incognito windows: it loads separately, it won't open up all the old tabs, and it's perfect for a quick Google search. Is there a way to launch Chrome with a single Incognito window open without having to open the browser in the normal mode (and load the bazillion tabs I have sitting there)? Sincerely, Tab Crazy That's a rather clever workaround to your problem. Since you've already done the hard work of figuring out the solution you need, we're more than happy to help you across the finish line. The magic you seek is available via what are known as "command line options", which allow you to add additional parameters and switches onto a command. By appending to the command the Chrome shortcut uses, we can easily tell it to launch in Incognito mode. (And, for other readers following along at home, we can do the same thing with other browsers like Firefox.) First, let's look at Chrome's default shortcut. If you right-click on it and select Properties, you'll see where the shortcut points:

    "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"

    If you run that shortcut, you'll open up normal browsing mode in Chrome and your saved tabs will all load. What we need to do is use the command line switches available for Chrome and tell it that we want it to launch an Incognito window instead. Doing so is as simple as appending -incognito to the end of the "Target" box's command line entry, like so:

    "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" -incognito

    We'd also recommend changing the icon so it's easy to tell the default Chrome shortcut apart from your new Incognito shortcut. When you're done, make sure to hit OK/Apply at the bottom to save the changes. You can recreate the same private-browsing-shortcut effect with other major web browsers too. Repeat the shortcut-editing steps we highlighted above, but swap -incognito for -private (for Firefox and Internet Explorer) or -newprivatetab (for Opera). With just a simple command line switch applied, you can now launch a lightweight single browser window for those quick web searches without having to stop your game and load up all your saved tabs. Have a pressing tech question? Email us at [email protected] and we'll do our best to answer it.
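    If you would rather script it than edit shortcuts by hand, the same switches can be passed from any launcher. A small Python sketch (the Firefox, IE and Opera paths below are typical install defaults, not taken from the article, so adjust them to your machine):

    import subprocess

    PRIVATE_FLAGS = {
        r"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe": "-incognito",
        r"C:\Program Files (x86)\Mozilla Firefox\firefox.exe": "-private",
        r"C:\Program Files\Internet Explorer\iexplore.exe": "-private",
        r"C:\Program Files (x86)\Opera\launcher.exe": "-newprivatetab",
    }

    def open_private(browser_path):
        # Launch the browser with its private-browsing switch and return immediately.
        subprocess.Popen([browser_path, PRIVATE_FLAGS[browser_path]])

    # open_private(r"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe")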

    Read the article

  • Advantages of Hudson and Sonar over manual process or homegrown scripts.

    - by Tom G
    My coworker and I recently got into a debate over a proposed plan at our workplace. We've more or less finished transitioning our Java codebase into one managed and built with Maven. Now, I'd like for us to integrate with Hudson and Sonar or something similar. My reasons for this are that it'll provide a 'zero-click' build step to provide testers with new experimental builds, that it will let us deploy applications to a server more easily, and that tools such as Sonar will provide us with well-needed metrics on code coverage, Javadoc, package dependencies and the like. He thinks that the overhead of getting up to speed with two new frameworks is unacceptable, and that we should simply double down on documentation and create our own scripts for deployment. Since we plan on some aggressive rewrites to pay down the technical debt previous developers incurred (gratuitous use of Java's Serializable interface as a file storage mechanism that has predictably bitten us in the ass), he argues that we can document as we go, and that we'll end up changing a large swath of code in the process anyway. I contend that having the accurate metrics that Sonar (or your favorite similar tool) provides gives us a good place to start for any refactoring efforts, not to mention general maintenance -- after all, knowing which classes are the most poorly documented, even if it's just a starting point, is better than seat-of-the-pants guessing. Am I wrong, and trying to introduce more overhead than we really need? Some more background: an alumnus of our company is working at a Navy research lab now and suggested these two tools in particular as ones they've had great success using. My coworker and I have also had our share of friendly disagreements before -- he's more of the "CLI for all, compiles Gentoo in his spare time and uses Git" type and I'm more of a "Give me an intuitive GUI, plays with XNA and is fine with SVN" type, so there's definitely some element of culture clash here.

    Read the article

  • Unavailable packages repository

    - by bitmask
    I'm running Ubuntu 11.10 (Oneiric) on this machine, and suddenly apt is unable to update properly. If I ask it to update its package information by running apt-get update (or alternatively telling the Update Manager to "check"), it succeeds for about 120 packages (more precisely, I get about 120 Ign/Hit lines) and then says it cannot find the universe Sources and restricted amd64 indexes:

    Hit http://de.archive.ubuntu.com oneiric-backports/multiverse Translation-en
    Hit http://de.archive.ubuntu.com oneiric-backports/restricted Translation-en
    Hit http://de.archive.ubuntu.com oneiric-backports/universe Translation-en
    Err http://de.archive.ubuntu.com oneiric/universe Sources 404 Not Found [IP: 141.30.13.20 80]
    Err http://de.archive.ubuntu.com oneiric/restricted amd64 Packages 404 Not Found [IP: 141.30.13.20 80]
    W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/universe/source/Sources 404 Not Found [IP: 141.30.13.20 80]
    W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/restricted/binary-amd64/Packages 404 Not Found [IP: 141.30.13.20 80]
    E: Some index files failed to download. They have been ignored, or old ones used instead.

    I manually checked the de server and cannot find anything wrong with the entries it's complaining about. It also looks pretty much like, say, the us mirror. But oddly enough, the IP it lists seems to point to a Debian package server, which obviously does not contain Ubuntu packages. So, is this a local problem that I can fix somehow (and if so, how?), or is there actually some server down right now?

    Read the article

  • Google Blogger Website CName and/or Text File Issues

    - by Francis Gibbons
    I have a Blogger blog and I would like to have it show up on my company website. I have read a couple of articles out there on how to do it. A handful of them talk about using FTP, which is old and no longer available. However, I am trying to follow along with this one: http://www.infinite42.com/small-business/integrate-blogger-blog-website which seems pretty easy, but I am having a problem getting Google to verify the DNS CNAME or TXT record that I created on my Windows 2007 Server. Do I need to create this record at the registrar level? Right now the domain is set up at the registrar to point the www record to my server, and on my server I tried the TXT record and the CNAME record in DNS with no luck. Here are the Google instructions for creating a CNAME record in DNS: Follow the steps below to create a DNS (Domain Name System) record that proves to Google that you own the domain. Add the CNAME record below to the DNS configuration for abc.com. CNAME Label / Host: CNAME Destination / Target: Click Verify below. When Google finds this DNS record, we'll make you a verified owner of the domain. (Note: DNS changes may take some time. If we don't find the record immediately, we'll check for it periodically.) To stay verified, don't remove the DNS record, even after verification succeeds. Here is the link to do it with a CNAME: http://googlewebmastercentral.blogspot.com/2012/08/domain-verification-using-cname-records.html When I go to add my CNAME record in my server's DNS, the only two fields available are Alias Name and Fully Qualified Domain Name. How am I supposed to create this record? Can someone please tell me? Thanks, Frank
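    As an aside, one quick way to confirm whether a CNAME or TXT record is actually visible to the outside world (and therefore to Google) is to query public DNS directly. A small sketch using the dnspython library; the record names below are placeholders, not the real verification values:

    import dns.resolver  # pip install dnspython

    def show_record(name, rtype):
        try:
            for answer in dns.resolver.resolve(name, rtype):
                print(name, rtype, "->", answer.to_text())
        except dns.resolver.NXDOMAIN:
            print(name, "does not exist yet")
        except dns.resolver.NoAnswer:
            print(name, "exists but has no", rtype, "record")

    # show_record("google-verification.abc.com", "CNAME")
    # show_record("abc.com", "TXT")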

    Read the article

  • Software for scanning installation process?

    - by no name
    I forget the name of a very good piece of software which makes a kind of restore point (saving the registry, the Program Files folder, etc.) before you install any software. After you install some program (e.g. "Notepad++") you can easily see what registry data the newly installed program uses, where its files are stored, and much more. The reason I'm asking for help is that I have to automate the installation of some public software, and after installation I need to uninstall it again, so I have to delete all the junk files it leaves behind. The software is called something like "install wizard" or "wizard install", I forget. If you have any idea of another application that does the same thing, or you know the exact name of that software, or you have a good idea of how to easily handle many installations and uninstallations, please let me know. OS: Win7/XP 64/32
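    In case it helps while hunting for the tool's name, the core idea (snapshot before the install, snapshot after, then diff) is simple to sketch. This is a rough illustration, not the program in question, and it only covers files, not the registry:

    import hashlib
    import os

    def snapshot(root):
        # Record a content hash for every readable file under root.
        state = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        state[path] = hashlib.md5(f.read()).hexdigest()
                except OSError:
                    pass  # locked or permission-denied files are skipped
        return state

    def diff(before, after):
        added = sorted(set(after) - set(before))
        changed = sorted(p for p in before if p in after and before[p] != after[p])
        return added, changed

    # before = snapshot(r"C:\Program Files")
    # ... run the installer ...
    # after = snapshot(r"C:\Program Files")
    # added, changed = diff(before, after)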

    Read the article

  • Should I ditch a creative pet project in lieu of one that would demonstrate skills more applicable to an employer?

    - by Hart Simha
    I am currently working on a project on github that I think would be a good demonstration of my initiative, creativity and enthusiasm. It is an educational game I am developing in pygame that enables the user to learn to improve their development productivity by using vim, specifically with python, though learning to code faster with vim should be transferable to any language. I think this is something that might have a mass appeal and benefit to a lot of people in a measurable way. -However- I am graduating from college in a month (my degree is computer science with a minor in english), with no experience that is relevant to helping me get any kind of job in the field, and a gpa that doesn't tout my merits. I could pursue a career in game development, but it's not necessarily what I'm most interested in, and see myself applying to startups around the country. To the places I am looking at applying, showing that I have experience with pygame is going to be largely irrelevant, except in demonstration of my ability to code, period. A lot of skills that ARE more marketable, such a data modeling, GIS, mobile development, javascript, .net framework, and various web development technologies, are not going to be showcased by this project (on the upside, employers do like to see familiarity with git and python). I'm wondering if I should sink all my free time in the next couple of months into this project, since I'm motivated and interested in it, and if the value of being able to demonstrate ambition and 'good ideas' (for lack of a better term, and in my own opinion) will compensate for the absence of demonstrating more sought-after skills. I am probably at a point where I should either commit fully to this project now, or put it on the backburner in favor of something else, and I am leaning towards continuing with what I am already working on, because I think it's a great idea, and something achievable to me with enough dedication over the next couple months. But the most important thing to me is being able to get a job out of college, which I am exceedingly concerned about as the professional landscape which I am navigating for the first time is a lot more intimidating than I could have anticipated, with almost every job (even short-term contract positions) requiring years of experience which I lack. Oh, and in case anyone is interested, my repository is here: www.github.com/hmsimha/vimagine

    Read the article

  • How to use BT or emule across 2 or more hard drives?

    - by the searcher
    One difficulty with BT or eMule is that, when the hard drive is full, we constantly need to move older files to a new hard drive so that we can download newer files. We can change BT's or eMule's settings so that the download folder points to the new hard drive, but then what if eMule hasn't finished downloading some files that are hard to find and one of them is 92% done... in that case, we would like to keep the old setting so that when the last 8% arrives, it goes into the correct file (and the same for BT, if we haven't finished some file or if we want to seed something later). So is there a good way to let BT or eMule point to 2 hard drives, or somehow let the new hard drive "merge" into the existing hard drive / folder?

    Read the article

  • 5 Ways to Celebrate the Release of Internet Explorer 9

    - by David Wesst
    The day has finally come: Microsoft has released a web browser that is awesome. On Monday night, Microsoft officially introduced the world to the latest addition to its product family: Internet Explorer 9. That makes March 14, 2011 (also known as Pi Day) the official birthday of Microsoft's rebirth in the world of web browsing. Just like any big event, you take some time to celebrate. Here are a few things that you can do to celebrate the return of Internet Explorer.
    1. Download It
    If you're not a big partier, that's fine. The one thing you can do (and definitely should) is download it and give it a shot. Sure, IE may have disappointed you in the past, but believe me when I say they really put the effort in this time. The absolute least you can do is give it a shot to see how it stands up against your favourite browser.
    2. Get yourself an HTML5 Shirt
    One of the coolest, if not best, parts of IE9 being released is that it officially introduces HTML5 as a fully supported platform from Microsoft. IE9 supports a lot of what is already defined in the HTML5 technical spec, which really demonstrates Microsoft's support of the new standard. Since HTML5 is cool on the web, it means that it is cool to wear it too. Head over to html5shirt.com and get yourself, or your staff, or your whole family, an HTML5 shirt to show the real world that you are ready for the future of the web.
    3. HTML5-ify Something
    Okay, so maybe a shirt isn't enough for you. Maybe you need to start using HTML5 for real. If you have a blog, or a website, or anything out there on the web, celebrate IE9 by adding some HTML5 to your site. Whether that is updating old code, adding something new, or just changing your WordPress theme, definitely take a look at what HTML5 can do for you.
    4. Help Kill Old IE and Upgrade your Organization
    See this? This is sad. Upgrading web browsers in a large enterprise or organization is not a trivial task. A lot of companies will use the excuse of not having the resources to upgrade legacy web applications that were built for a specific version of IE and don't render correctly in newer browsers. Well, it's time to stop the excuses. IE9 allows you to define what version of Internet Explorer you would like it to emulate. It takes minimal effort for the developer, and will get rid of the excuses. Show your IT manager or software development team this link and show them how easy it is to make old code render right in the latest and greatest from the IE team.
    5. Submit an Entry for DevUnplugged
    So, you've made it to number five, eh? Well then, you must be pretty hardcore to make it this far down the list. Fine, let's take it to the next level and build an HTML5 game. That's right. A game. Like a video game. HTML5 introduces some amazing new features that let you build working video games using HTML5, CSS3, and JavaScript. Plus, Microsoft is celebrating the launch of IE9 with a contest where you can submit an HTML5 game (or audio application) and have a chance to win a whack of cash and other prizes. Head here for the full scoop and rules for the DevUnplugged. This post also appears at http://david.wes.st

    Read the article

  • OpenGL: Want to keep gun on top of car and be able to control angle. Having difficulties.

    - by Blair
    So I am making a simple game. I want to put a gun on top of a car; basically a long rod in the middle of a block is how I am modelling it right now. I want to be able to control the angle of the gun. Basically it can go all the way forward so that it is parallel to the ground, facing the direction the car is moving, or it can point behind the car, and any of the angles in between these positions. I have something like the following right now, but it's not really working. Is there a better way to do this that I am not seeing?

    #This will place the car
    glPushMatrix()
    glTranslatef(self.position.x, 1.5, self.position.z)
    glRotated(self.rotation, 0.0, 1.0, 0.0)
    glScaled(0.5, 0.5, 0.5)
    glCallList(self.model.gl_list)
    glPopMatrix()

    #This will place the gun on top
    glPushMatrix()
    glTranslatef(self.position.x, 2.5, self.position.z)
    glRotated(self.tube_angle, self.direction.z, 0.0, self.direction.x)
    print self.direction.z
    glRotated(45, self.position.z, 0.0, self.position.x)
    glScaled(1.0, 0.5, 1.0)
    glCallList(self.tube.gl_list)
    glPopMatrix()

    This almost works. It moves the gun up and down. But when the car moves around, the angle of the gun changes. Not what I want.
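    One likely culprit is that the gun's pitch axis is being built from world-space vectors (self.direction, self.position), so it changes as the car moves. A common approach is to apply the same yaw the car uses first, and then pitch about the local side axis, so the elevation is always measured relative to the hull. A rough drop-in sketch for the gun block (assuming self.rotation is the car's heading about Y and self.tube_angle is the elevation, both in degrees; untested against the actual models):

    # from OpenGL.GL import *   (same imports as the rest of the draw code)
    glPushMatrix()
    glTranslatef(self.position.x, 2.5, self.position.z)   # move to the gun mount
    glRotated(self.rotation, 0.0, 1.0, 0.0)               # yaw with the car first
    glRotated(self.tube_angle, 1.0, 0.0, 0.0)             # then pitch about the car's local side axis
    glScaled(1.0, 0.5, 1.0)
    glCallList(self.tube.gl_list)
    glPopMatrix()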

    Read the article

  • What could be causing Windows to randomly reset the system time to a random time?

    - by Jonathan Dumaine
    My Windows 7 machine infuriates me. It cannot hold a date. At one point it all worked fine, but now it will decide that it needs to change the system time to a random time and date, either in the future or the past. There seems to be no correlation or set interval for when it happens. In an attempt to remedy this, I have:
    - correctly set the time in the BIOS
    - replaced the motherboard battery with a new CR2032 (and even checked it with a multimeter)
    - tried disabling automatic internet time synchronization via the "Date and Time" dialog
    - stopped, restarted, and finally left disabled the Windows Time service
    Yet with all of these actions, the time continues to change. Any ideas?

    Read the article

  • trying to install lync 2010 and experiecing an error with central management store

    - by Itai Ganot
    I'm trying to install Lync 2010 and I'm getting stuck at the stage where I have to install or point to the local configuration store. I've tried finding it in the domain without luck; any recommendations?

    PS C:\Users\Administrator.ASUTA> Get-csconfigurationstorelocation
    WARNING: No Configuration Store location has been set.
    PS C:\Users\Administrator.ASUTA> Get-CsComputer "$env:computername.$env:userdnsdomain"
    Get-CsComputer : Cannot find location of Central Management Store in Active Directory.
    At line:1 char:15
    + Get-CsComputer <<<< "$env:computername.$env:userdnsdomain"
        + CategoryInfo          : ResourceUnavailable: (:) [Get-CsComputer], ManagementStoreNotFoundException
        + FullyQualifiedErrorId : ManagementStoreNotFound,Microsoft.Rtc.Management.Xds.GetComputerCmdlet
    PS C:\Users\Administrator.ASUTA>

    Read the article

  • Default virtual server does not work

    - by Luc
    Hello, I have 4 name-based virtual hosts in my Apache configuration, each one using proxy_http to forward requests to the correct server. They work fine.

    <VirtualHost *:80>
        ServerName application_name.domain.tld
        ProxyRequests Off
        ProxyPreserveHost On
        ProxyPass / http://server_ip/
        ProxyPassReverse / http://server_ip/
    </VirtualHost>

    I then tried to add a default NameVirtualHost to take care of the requests for which the server name does not match one of the four others. Otherwise a request like some_weird_styff.domain.tld would be forwarded to one of the 4 virtual hosts. I added this one:

    <VirtualHost *:80>
        ServerAlias "*"
        DocumentRoot /var/www/
    </VirtualHost>

    At the beginning it seemed to work fine, but at some point it appears that requests that should be handled by one of the 4 regular hosts are "eaten" by the default one! If I a2dissite this default host, everything is back to normal... I do not really understand this. If you have any clue... thanks a lot, Luc

    Read the article

  • help with Outlook Exchange server and curl

    - by stib
    I work on a Mac in a building full of PCs, and the IT department here doesn't have IMAP access turned on on the Exchange servers. So I miss a lot of meetings, because I don't get reminders, because I access my mail via Outlook Web Access. I had written a script to scrape my Outlook Web Access calendar and turn it into iCal format, so I could get my reminders via Thunderbird or iCal.app. It basically downloaded the calendar page via curl, parsed the HTML and reformatted all the appointments as iCal. It wasn't elegant, but it worked. Then they changed to Outlook 2007, and it doesn't work any more. I have a sketchy knowledge of curl, and almost zero knowledge of how Outlook works. Can anyone point me towards a reference for getting calendar info out of an Exchange server without using Outlook? If I can configure curl to get the HTML I will be happy, but if there's a more elegant way, such as getting the calendar info as XML, I'll be delirious.
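    For reference, the scrape-and-convert approach described above looks roughly like the following in Python. The URL, auth scheme and CSS selector are placeholders, not known OWA 2007 values (the real markup and paths have to be discovered by inspecting the calendar page, and a usable .ics also needs DTSTART/DTEND parsed out):

    import requests
    from requests_ntlm import HttpNtlmAuth   # assumes the OWA server accepts NTLM auth
    from bs4 import BeautifulSoup

    OWA_CALENDAR_URL = "https://mail.example.com/owa/?ae=Folder&t=IPF.Appointment"  # placeholder

    def fetch_appointment_titles(user, password):
        resp = requests.get(OWA_CALENDAR_URL, auth=HttpNtlmAuth(user, password))
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        # Hypothetical selector; find the real one by viewing the calendar page's source.
        for row in soup.select("div.calendarItem"):
            yield row.get_text(strip=True)

    def to_ics(titles):
        lines = ["BEGIN:VCALENDAR", "VERSION:2.0"]
        for title in titles:
            lines += ["BEGIN:VEVENT", "SUMMARY:" + title, "END:VEVENT"]
        lines.append("END:VCALENDAR")
        return "\r\n".join(lines)

    # print(to_ics(fetch_appointment_titles("DOMAIN\\user", "password")))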

    Read the article

  • Effective versus efficient code

    - by Todd Williamson
    TL;DR: Quick and dirty code, or "correct" (insert your definition of this term) code? There is often a tension between "efficient" and "effective" in software development. "Efficient" often means code that is "correct" from the point of view of adhering to standards, using widely-accepted patterns/approaches for structures, regardless of project size, budget, etc. "Effective" is not about being "right", but about getting things done. This often results in code that falls outside the bounds of commonly accepted "correct" standards, usage, etc. Usually the people paying for the development effort have dictated ahead of time what it is that they value more. An organization that lives in a technical space will tend towards the efficient end, others will tend towards the effective. Developers often refuse to compromise their favored approach for the other. In my own experience I have found that people with formal education in software development tend towards the Efficient camp. Those that picked up software development more or less as a tool to get things done tend towards the Effective camp. These camps don't get along very well. When managing a team of developers who are not all in one camp it is challenging. In your own experience, which camp do you land in, and do you find yourself having to justify your approach to others? To management? To other developers?

    Read the article
