Search Results

Search found 15863 results on 635 pages for 'made in india'.


  • Two Cloudy Observations from Oracle OpenWorld

    - by Gene Eun
    Now that the dust has settled from another amazing Oracle OpenWorld, I wanted to reflect back on a couple of key observations I made during the event. First, it was pretty clear that Cloud was again a big deal at this year's conference. It was hard to not notice that Oracle continues to be "all-in" with respect to cloud computing. Just to give you an idea of the emphasis on Cloud, there were over 300 Cloud-related sessions at this year's OpenWorld. If you caught some of the demo booths in the Oracle Red Lounge, then you saw some of the great platform, application, and social services that are now part of Oracle Cloud, as well as numerous demos of private cloud products that Oracle offers. Second, during Thomas Kurian's keynote presentation on Oracle Cloud, he announced the Preview Availability of a new service called Oracle Developer Cloud Service. This new platform service will provide developers with instant access to environments to better manage the application development lifecycle in the cloud. It provides development project teams access to favorite tools like Hudson, Git, Github, wikis, and tasks to help make innovation faster, more collaborative, and more effective. There's also integration with IDEs like Eclipse, NetBeans, and JDeveloper. If you're a developer, it's an awesome addition to Oracle Cloud's platform services! Want more details about Oracle Developer Cloud Service? Click here.

    Read the article

  • How to access website on localhost from internet?

    - by Nitesh Panchal
    Hello, I turned my PC into a router by enabling port forwarding on port 80 and registered my hostname (xyz.donexist.org) through dyndns.com. Now when I type my public IP address into a browser I get redirected to my router. I have installed GlassFish and my website is deployed in GlassFish. I want my website to open when I type xyz.donexist.org. What further steps do I need to take? I have made an entry in the /etc/hosts file as: 127.0.0.1 xyz.donexist.org. Please guide me, I am a beginner. Thanks in advance :)
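
    For context, a minimal sketch of the pieces usually involved here, assuming GlassFish listens on its default port 8080 on a Linux host (the port-forward rule and the curl check are illustrative, not taken from the question):

      # Forward incoming port 80 to GlassFish's default HTTP listener on 8080
      sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080

      # The /etc/hosts entry (127.0.0.1 xyz.donexist.org) only affects name
      # resolution on this machine; outside visitors resolve the name via dyndns.

      # Quick local check that the site answers on the hostname
      curl -H "Host: xyz.donexist.org" http://localhost/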

    Read the article

  • firefox raided?

    - by dschinn1001
    Ubuntu 12.10 with Firefox 17.0. Firefox is now suddenly not starting any more. Since I tried something with VirtualBox and Win7, Firefox seems to be "raided" (broken) somehow. After the next boot, VirtualBox with Win7 did not work any more either and showed a message like this: Premature end of data in tag VirtualBox line 8. Location: '/home/$user/VirtualBox VMs/V12/V12.vbox', line 86 (8), column 2. /home/vbox/vbox-4.2.4/src/VBox/Main/src-server/MachineImpl.cpp[724] (nsresult Machine::registeredInit()). Error code: NS_ERROR_FAILURE (0x80004005), Component: Machine, Interface: IMachine {22781af3-1c96-4126-9edf-67a020e0e858}. This all happened after Win7 installed an update and the virtual hard disk, normally 25 GiB, grew to 26.8 GiB. When I created the VirtualBox Win7 machine I had plugged in a 500 GiB USB hard disk, but somehow Win7 was not installed on the 500 GiB disk; instead only the default 25 GiB virtual hard disk was created, and the smaller hard disk (120 GiB maximum) was completely filled up. This is what created the problem with the "raided" Firefox, and Firefox cannot start any more. After I deleted the VirtualBox Win7 machine there are 12 GiB of free space back on the smaller hard disk, but Firefox still does not start after a reboot.

    Read the article

  • Why do some bad websites rank well?

    - by BradB
    Consider the following scenario: you are pitching SEO/Website Optimisation to a prospective client and you explain to them the importance of great copy and content, how acquiring links (ethically) can increase page rank, why the quality of the HTML build matters (H1, H2 tags, W3C validation etc.), and why keyword research is beneficial; you may drop in a few Google Webmaster Guidelines or Matt Cutts references to back up your claims, and rubbish the "black hat" approach as being no longer effective for good measure. Your advice is ethical and, in the eyes of best practices, spot on. Then the client points out to you some of their long established competitors on Google and you see these competitor websites ranking in the top spots (1 to 3) for medium to highly competitive search phrases that your client wants to compete for. These websites totally contradict your ethical approach and pretty much violate every best practice previously noted. They even outperform other "white hat" competitors who are in accordance with the above guidelines. I experienced this today. One of these well ranking websites had: about six microsites with more or less the same copy and a slightly varied layout; little or no textual content (I would almost say duplicate content across the sites, but there was so little of it that it could barely qualify as duplicate); all the content in Flash (with a music track that kicked in on each page load, not so much an SEO issue, but it helps paint the picture); keyword stuffing behind the Flash file with a bunch of black text on a black background in the style of "keyword 1 keyword 2,keyword1,keyword 2,keyword 2 keyword 3" and so on, with the exact keyword-stuffed combination present on every page of the website; a bunch of clearly self-made links from poor quality forums and directories with little or no PageRank; and links exchanged across the microsites. How do you explain your way out of this when this hard evidence is sat in front of you undermining your great pitch?

    Read the article

  • Can I use PLink and Pageant with Cygwin's ssh?

    - by Jerph
    I'm now using msysgit because of the GUI tools, which use Putty's Pageant and PLink utilities, but I use Cygwin as a general SSH terminal. I had been using ssh-agent on Cygwin, but that means I have to enter my SSH key passphrases for both SSH key managers. Is it possible to configure all my Unix-port tools (msys, git, cygwin, Ruby Net:SSH, etc.) to use PLink/Pageant instead of ssh-agent? It seems that's the kind of thing PLink was made for, but I can't find documentation on how.
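
    Not an authoritative answer, but for the Git piece specifically, both msysgit and Cygwin git can usually be pointed at PLink (and therefore Pageant) through the GIT_SSH environment variable; the PuTTY path and repository URL below are assumptions:

      # Tell git to invoke plink.exe (which authenticates via Pageant) instead of OpenSSH
      export GIT_SSH="/cygdrive/c/Program Files (x86)/PuTTY/plink.exe"
      git clone ssh://git@example.com/project.git   # hypothetical repository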

    Read the article

  • Application shortcut reappears on restart

    - by Nathan Friesen
    I have an application that I have built a .msi installer for through Microsoft Visual Studio 2010. I recently made some updates, including changing the version number, and rebuilt the installer with these updates. The installer includes shortcuts on both the desktop and in the Start menu. Running the installer appears to work fine, and both of these shortcuts work. After restarting my computer I've found that the shortcuts are changed to have a Target type of Application (Installs on first use) and the Start In: field is changed to a location that doesn't exist. Once this happens, every time you use that shortcut it tries to install the application again and fails. I have also changed the name of the shortcut that the installer creates. This appears to work, and the shortcut still works after a restart. After the restart, though, the shortcut with the old name that doesn't work also appears on the desktop and in the Start menu. Does anyone have any ideas what I may have set up wrong, or what I need to change to get the shortcuts to behave properly?
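
    The "Installs on first use" target suggests Windows Installer is creating advertised shortcuts. As a hedged sketch only (not a confirmed fix for this particular setup project), the standard Windows Installer property DISABLEADVTSHORTCUTS forces regular shortcuts instead; the installer file name below is a placeholder:

      rem Install with advertised shortcuts disabled (MyInstaller.msi is a placeholder name)
      msiexec /i MyInstaller.msi DISABLEADVTSHORTCUTS=1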

    Read the article

  • What WYSIWYG software can I use to create a web page?

    - by Roman
    I have always made web pages by writing HTML code, but now I would like to try a WYSIWYG approach. Can anybody recommend a program I can use for that? I mean a program in which you can move buttons, tables, and pictures with the mouse, change sizes and shapes with the mouse, and use nice templates for blocks of text, buttons, backgrounds, and so on. I am using Windows 7. Maybe I already have something pre-installed?

    Read the article

  • Welcome to the Red Gate BI Tools Team blog!

    - by Red Gate Software BI Tools Team
    Welcome to the first ever post on the brand new Red Gate Business Intelligence Tools Team blog! About the team Nick Sutherland (product manager): After many years as a software developer and project manager, Nick took an MBA and turned to product marketing. SSAS Compare is his second lean startup product (the first being SQL Connect). Follow him on Twitter. David Pond (developer): Before he joined Red Gate in 2011, David made monitoring systems for Goodyear. Follow him on Twitter. Jonathan Watts (tester): Jonathan became a tester after finishing his media degree and joining Xerox. He joined Red Gate in 2004. Follow him on Twitter. James Duffy (technical author): After a spell as a writer in the video game industry, James lived briefly in Tokyo before returning to the UK to start at Red Gate. What we’re working on We launched a beta of our first tool, SSAS Compare, last month. It works like SQL Compare but for SSAS cubes, letting you deploy just the changes you want. It’s completely free (for now), so check it out. We’re still working on it, and we’re eager to hear what you think. We hope SSAS Compare will be the first of several tools Red Gate develops for BI professionals, so keep an eye out for more from us in the future. Why we need you This is your chance to help influence the course of SSAS Compare and our future BI tools. If you’re a business intelligence specialist, we want to hear about the problems you face so we can build tools that solve them. What do you want to see? Tell us! We’ll be posting more about SSAS Compare, business intelligence and our journey into BI in the coming days and weeks. Stay tuned!

    Read the article

  • Dell V105 and 10.6

    - by James
    Hello, I am trying to get a Dell V105 printer to work with my iMac. It is running 10.6 and I have installed all the drivers on the Snow Leopard disk. I believe there is no official Dell driver, but I have also heard that Dell printers are made by third parties that may sell the same printer under different names. There are quite a few Lexmark models that look very similar, and I have tried setting the driver to the 2600 series, which looks very similar, but that has not enabled me to print, although it does report the supply levels correctly. Can anyone help? Thanks for your time

    Read the article

  • Website restyle, SEO migration plan?

    - by Goboozo
    I am currently in a project for one of my biggest clients. We have built a website that will -replace- the old website. When it comes to actual content, it is largely the same. However, the presentation of the content has changed drastically, and from our point of view it is much more user-friendly (the main reason to update the site). Now, since the site's presentation has changed, we have some major changes in: HTML & CSS: To change the presentation of the content URLs: To make them easier to understand (301 redirects have been taken care of and are in place) Breadcrumbs: To enhance the navigation (we have made the breadcrumbs match the URLs exactly) Pagination: This was added to enable content browsing Title tags: Added descriptive title tags to the major links and buttons. Basically all user content, including meta tags, has remained the same. Now, since this company is rather successful and 90% of its clients come from Google's organic results, I am obliged to take all necessary precautions. People tell me I need a migration plan to prevent the site being hurt in Google, but I have never worked with such a plan... ...So, based on the above: would you consider a migration plan necessary, and what precautions/actions would you recommend to prevent us being pushed down in our SERP positions? Many thanks in advance for your answers.
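
    One low-cost precaution worth including in any such plan: spot-check the 301s from a sample of old URLs before and after launch. A quick sketch (the URL is a placeholder):

      # Expect "HTTP/1.1 301 Moved Permanently" and a Location: header pointing at the new URL
      curl -I http://www.example.com/old-product-page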

    Read the article

  • JQuery / JSON + .Net Service Layer - to WCF or Not to WCF?

    - by hanzolo
    I recently had a discussion with a colleague of mine about the pros and cons of WCF. He mentioned how much code is generated to support WCF, and also the overhead required. It was mentioned that a simple jQuery/Ajax post to a .aspx page (or a handler, for that matter) that returns JSON would work more efficiently and take much less code to implement. I am also aware of the new WCF Web API and feel that technology may remove the "bloat" involved in attaining a proxy, etc., by just outputting JSON. So when developing a relational DB (MSSQL) storage model, with a fairly complex Business Layer (C#) and Data Access Layer (Entity Framework), what's a good technology for creating a "service layer" which will spit out View Models represented in JSON, with a CQRS (Command Query Responsibility Segregation) approach in mind? The app would use the service layer to support its UI, as well as provide an available subset of services (outputting JSON data) for service subscribers. In other words, an admin panel to support the admin UI, and service endpoints that return JSON to access the configurations made from the administration UI. What are some potential technologies to use as the transport/communication layer? I'd like to use a pure RESTful approach, but am not against doing some URL rewriting with IIS. Obviously some of the available technologies are: WCF WCF Web API (should this even be separate?) Straight request/response (query string to .aspx / handler) Would using ASP.NET MVC solve this entire problem? Maybe its single-page app approach? Any suggestions / feedback from developing this type of application? Thanks,
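
    Whichever server stack wins out, the contract the jQuery client (or any subscriber) actually sees is just an HTTP endpoint returning JSON, along these lines (the URL and payload are hypothetical):

      # A subscriber or the admin UI would consume a view model roughly like this
      curl -H "Accept: application/json" https://example.com/services/configurations/42
      # {"id":42,"name":"Sample configuration","enabled":true}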

    Read the article

  • How can email possibly be routed to the right place with no to: address?

    - by agent154
    I'm no novice on networking technology, but one thing I don't really know much about in detail is email and headers. How does email work, SPECIFICALLY? I'm getting spam in my Hotmail inbox even though I've made painful attempts not to give out my actual email address. I use my own domain name to forward email to my inbox using several aliases. Yet now I'm getting spam with no address in the To: line, or with "undisclosed recipients". Looking at the headers is of no help whatsoever. So from a technical standpoint, I have to wonder... if I send an email to a certain address in my personal domain and it gets forwarded to my Hotmail account, how does Hotmail know which inbox to dump the message in if that address is not listed in the headers?
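
    For what it's worth, a sketch of a raw SMTP exchange shows why this is possible: delivery is driven by the envelope recipient given in RCPT TO, which never has to appear in the headers you can see (all addresses and host names below are made up):

      telnet mx.hotmail.example 25           # hypothetical receiving mail server
      HELO bulkmailer.example
      MAIL FROM:<offers@bulkmailer.example>  # envelope sender
      RCPT TO:<you@hotmail.com>              # envelope recipient: this decides the mailbox
      DATA
      From: "Special Offer" <offers@bulkmailer.example>
      To: undisclosed-recipients:;           # the visible To: header can say anything, or nothing
      Subject: Hello
      .
      QUIT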

    Read the article

  • ArchBeat Link-o-Rama for 11/30/2011

    - by Bob Rhubart
    Coding - the new Latin | @BBCRoryCJ BBC Technology Correspondent Rory Cellan-Jones reports on why "the campaign to boost the teaching of computer skills - particularly coding - in schools is gathering force." BPM Business Value Patterns | SOA Partner Community Blog Juergen Kress shares the presentation he and Matthias Ziegler from Accenture delivered at the SOA & BPM Integration Days event in Germany in October. Coherence 3.7.1 Resources Busy blogger Juergen Kress shares links to screencasts and other resources for those interested in Oracle Coherence 3.7.1. OBIEE 11.1.1 - Introduction to OBIEE 11g Full Sample App "The OBIEE 11g Full Sample App (FSA) is a comprehensive collection of examples designed to demonstrate the latest Oracle BIEE 11g capabilities and design best practices." Solaris 11 Customer Maintenance Lifecycle | Gerry Haskins Gerry Haskins launches a new blog devoted to Solaris "policies, best practices, clarifications, and lots of other stuff." Harnessing Business Events for Predictive Decision Making - part 1 / 3 | Sanjeev Sharma "Data growth is outpacing storage capacity by a factor of two and computing power is still very much bounded by Moore's Law, doubling only every 18 months," says Sanjeev Sharma. The Latest Research from the SEI | Douglas C. Schmidt Schmidt shares information on several recently published Software Engineering Institute (SEI) technical reports that "highlight the latest work of SEI technologists in Agile methods, insider threat, the SMART Grid Maturity Model, acquisition, and CMMI." Tiger/Line Shape Files and Oracle | Bradley D. Brown "Have you ever needed to load an ESRI "shape file" and wondered if that's an easy effort or a difficult effort? I know I have and I assumed that it was a pretty difficult effort. However, I learned today that's actually pretty easy!" -- Oracle ACE Director Bradley Brown of TUSC. Webcast: Enterprise Clouds with Oracle VM Tuesday, December 6, 2011, 9:00 am PT / Noon ET. Featuring Adam Hawley (Senior Director of Product Management, Oracle) and Dan Herrup (Principal Systems Engineer, Oracle Corporate Citizenship). SOA Made Simple; Architects in AZ; Cloud Migration Introduction This week on the Architect Home Page on OTN.

    Read the article

  • Administrator File Modification Privilege

    - by Leigh Riffel
    Windows Server 2008 apparently allows an application to somehow configure a folder so that any changes made within the folder require administrator-level access. I log in with an account that has administrator privileges, but is not the local Administrator account. When I do so I find that I can't save changes to files opened within this folder. I know I can open the application as administrator, or move the file out of the folder, make the change, then move it back in, but I'm hoping there is a better way short of disabling the protection entirely. Is there a way, perhaps, to remove it for the files I frequently edit?
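
    If the folder sits under a protected location such as Program Files, one possible workaround (a sketch only; the path and account are placeholders, and loosening ACLs has security implications) is to grant your own account Modify rights on just the files you edit:

      rem Grant DOMAIN\me Modify (M) access on a single frequently edited file
      icacls "C:\Program Files\SomeApp\settings.ini" /grant "DOMAIN\me":M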

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways: If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this: Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one. These interdependencies are not obvious to the make utility, and can lead to races. They are not obvious to the human reader, who may therefore not realize that they exist, and break them. Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems: It requires a long list of exceptions to silence our normal unused proto item error checking. In the past, we have accidentally shipped files that we did not intend to deliver to the end user. Mixing cruft with valuable items makes it hard to discern which is which. The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and for the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long term case by case project, but the overall trend is clear. The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: How to ensure that we're linking to the OS bits we're building instead of to those from the running system.
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
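
To make the search-order mechanism concrete, here is a simplified sketch of a link line (the proto paths are illustrative, not the actual OSnet makefile macros): every -L directory is searched before the default /lib and /usr/lib, so a dependency that is missing from the proto quietly comes from the running build system instead.

    # Workspace proto areas are searched first; only if a -l dependency is absent
    # there does the link-editor fall back to the build machine's /lib and /usr/lib.
    # (cc simply passes the -L and -l options through to ld.)
    cc -o prog prog.o \
        -L$ROOT/proto/root_i386/lib \
        -L$ROOT/proto/root_i386/usr/lib \
        -lc -lnsl
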
While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors: Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib). Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch. Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults. Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated 7021198 (ld option to warn when link accesses a library via default path) and PSARC/2011/068 (ld -z assert-deflib option) into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor: -z assert-deflib=[libname] Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library. ld ... -z assert-deflib=libc.so ... -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exception to this is dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with 7021896 (Prevent OSnet from accidentally linking to build system), which integrated into snv_162 (March 2011).
Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult such errors can be to prevent without an automatic system in place to catch them. Conclusions The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • Virtual box - How to add disks and move var, opt and home to them?

    - by Jarrod Roberson
    I created a CentOS 5.6 guest OS virtual machine. I made the first disk 10 GB, and I am rapidly outgrowing it. It was suggested that I make disks for my /var, /opt and /home directories and move them there, so I can better manage the disks for backing up and whatnot. This sounds like a good idea. I know how to create the disks in VirtualBox. I have dug around Google and the internet in general, and all my attempts at doing this have failed. Snapshots are awesome! I can get the drives fdisked, and I have had limited success mounting them to /mnt/var, /mnt/home and /mnt/opt, but even in single-user mode (init 1) I can't get the entire contents of the directories to move over, and then the machine won't reboot correctly. cd /var cp * -ax /mnt/var The /var directory in particular does not want to move everything to the new location. How do I format, mount and move /var, /opt and /home to my new disks?
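
    For reference, a commonly used sequence for this kind of move looks roughly like the following (the device name is an assumption, and /var is best moved from single-user mode or a rescue environment):

      # Copy contents including hidden files, preserving attributes (note the trailing /.)
      cp -ax /var/. /mnt/var/
      # Keep the old copy until the new mount is proven, then mount the new disk over /var
      mv /var /var.old && mkdir /var
      echo '/dev/sdb1  /var  ext3  defaults  1 2' >> /etc/fstab   # /dev/sdb1 is an assumption
      mount /var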

    Read the article

  • How can I send an email from Mail.app to Outlook with an attachment that does not embed into the email body?

    - by JAG2007
    I'm using Mail.app (on Mac OS X 10.6), and when I send an email with an attached image to users on PC Outlook, they get the email with the image embedded in the body, not as an attachment. I even tried clicking "view as icon" before sending the attachment from Mail.app, but that made no difference. I also tried this myself, sending from Mail.app over to my PC's Outlook, and I get the same problem: in Outlook the image does not come through as an attachment, but as an image embedded in the body of the email. This is an issue primarily because the user is then unable to click "save as" and has to copy and paste the image into some other program, which means the file is converted from JPG or PNG to the BMP format. But beyond that, most of my recipients don't even know how to copy and paste it into another program to save it that way anyway. They need the "save attachment as" functionality.

    Read the article

  • An Oracle decade

    - by Jürgen Kress
    Almost 10 years with Oracle; together we have built an Oracle SOA economy with thousands of SOA consultants and millions in services and license revenue. The SOA Partner Community started in Europe and grew around the world. Since March 2007 we have distributed our monthly SOA Partner Community newsletter with the latest updates around SOA. In 2010 we added Web 2.0 features like Twitter, wiki, Mix and Delicious to the community. The active SOA Partner Community made us the most successful middleware Specialization. Thanks to our ACE Directors and Clemens, we host regular Partner Advisory Councils jointly with our product management team. Not to forget all the superb events with Thomas Erl, like the SOA Symposium and the Community Forum in Copenhagen. Thanks for all your contributions and support! What's next? See you one more time at the SOA Partner Community Forum 2011. Wishing you all a great start in 2011. Jürgen Kress For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: SOA Community,Oracle,SOA,SOA Partner Community,Oracle decade,Jürgen Kress,OPN,Specialization,Thomas Erl,ACE,SOA Partner Community Forum,SOA Community Forum

    Read the article

  • My (C:) drive changed from Basic to Dynamic, is this bad?

    - by bbman225
    I'm really worried here. My computer still runs, so I take this as a good sign, but let me explain my situation: I am trying to install Ubuntu Linux, and the installer was having problems, so I went back into the partitioning tool in Windows 7 (after having successfully shrunk my C drive and created 55 GB of unallocated space) and attempted to create a new partition out of the 55 GB and make it a simple NTFS drive, so that I could let the installer wipe it clean again and format it in whatever file system it prefers. Now, after googling it and running through the process, I noticed that all of my drives, including the C drive and the one I just made, changed from type "Basic" to type "Dynamic." What is a dynamic drive, and should I be worried?

    Read the article

  • Our winners- and some BBQ for everyone

    - by Steve Tunstall
    Congrats to our two winners for the first two comments on my last entry. Steve from Australia and John Lemon. Steve won since he was the first person over the International Date Line to see the post I made so late after a workday on Friday. So not only does he get to live in a country with the 2nd most beautiful women in the world, but now he gets some cool Oracle Swag, too. (Yes, I live on the beach in southern California, so you can guess where 1st place is for that other contest…Now if Steve happens to live in Manly, we may actually have a tie going…) OK, ok, for everyone else, you can be winners, too. How you ask? I will make you the envy of every guy and gal in your neighborhood or campsite. What follows is the way to smoke the best ribs you or anyone you know have ever tasted. Follow my instructions and give it a try. People at your party/cookout/campsite will tell you that they’re the best ribs they’ve ever had, and I will let you take all the credit. Yes, I fully realize this post is going to be longer than any post I’ve done yet. But let’s get serious here. Smoking meat is much more important, agreed? J In all honesty, this is a repeat of another blog I did, so I’m just copying and pasting. Step 1. Get some ribs. I actually really like Costco’s pack. They have both St. Louis and Baby Back. (They are the same ribs, but cut in half down the sides. St. Louis style is the ‘front’ of the ribs closest to the stomach, and ‘Baby back’ is the part of the ribs where it connects to the backbone). I like them both, so here you see I got one pack of each. About 4 racks to a pack. So these two packs for $25 each will feed about 16-20 of my guests. So around 3 bucks a person is a pretty good deal for the best ribs you’ll ever have. Step 2. Prep the ribs the night before you’re going to smoke. You need to trim them to fit your smoker racks, and also take off the membrane and add your rub. Then cover and set in fridge overnight. Here’s how to take off the membrane, which will not break down with heat and smoke like the rest of the meat, so must be removed. Use a butter knife to work in a ways between the membrane and the white bone. Just enough to make room for your finger. Try really hard not to poke through the membrane, you want to keep it whole. See how my gloved fingers can now start to lift up and pull off the membrane? This is what you are trying to do. It’s awesome when the whole thing can come off at once. This one is going great, maybe the best one I’ve ever done. Sometimes it falls apart and doesn't come off in one nice piece. I hate when that happens. Now, add your rub and pat it down once into the meat with your other hand. My rub is not secret. I got it from my mentor, a BBQ competitive chef who is currently ranked #1 in California and #3 in the nation on the BBQ circuit. He does full-day classes in southern California if anyone is interested in taking his class. Go to www.slapyodaddybbq.com to check him out. I tweaked his rub recipe a tad and made my own. It’s one part Lawry’s, one part sugar, one part Montreal Steak Seasoning, one part garlic powder, one-half part red chili powder, one-half part paprika, and then 1/20th part cayenne. You can adjust that last ingredient, or leave it out. Real cheap stuff you can get at Costco. This lets you make enough rub to last about a year or two. Don’t make it all at once, make a shaker’s worth and use it up before you make more. Place it all in a bowl, mix well, and then add to a shaker like you see here. 
You can get a shaker with medium-sized holes on it at any restaurant supply store or Smart & Final. The kind you see at pizza places for their red pepper flakes works best. Now cover and place in fridge overnight. Step 3. The next day. Ok, I’m ready to go. Get your stuff together. You will need your smoker, some good foil, a can of peach nectar, a bottle of Agave syrup, and a package of brown sugar. You will need this stuff later. I also use a clean spray bottle, and apple juice. Step 4. Make your fire, or turn on your electric smoker. In this example I’m using my portable charcoal smoker. I got this for only $40. I then modified it to be useful. Once modified, these guys actually work very well. Trust me, your food DOES NOT KNOW how expensive your smoker is. Someone who tells you that you need to spend a bunch of money on a smoker is an idiot. I also have an electric smoker that stays in my backyard. It’s cleaner and larger so I can smoke more food. But this little $40 one works great for going camping. Here is what my fire-bowl looks like. I leave a space in the middle open, and place cold charcoal and wood chunks in a circle going outwards. This makes it so when I dump the hot coals down the middle, they will slowly burn outwards, hitting different wood chunks at different times, allowing me to go 4-5 hours without having to even touch my fire. For ribs, I use apple and pecan wood. Pecan works for anything. Apple or any fruit wood is excellent for pork. So now I make my hot charcoal with a chimney only about half-full. I found a great use for that side-burner on my grill that I never use. It makes a fantastic chimney starter. You never use fluids of any kind, nor ever use that stupid charcoal that has lighter fluid built into it. Never, ever, ever. Step 5. Smoke. Add your ribs in the racks and stack them up in your smoker. I have a digital thermometer on a probe that I use to keep track of the temp in the smoker. I just lay the probe on the top rack and shut the lid. This cheap guy is a little harder to maintain the right temperature of around 225 F, so I do have to keep my eye on it more than my electric one or a more expensive charcoal one with the cool gadgets that regulate your temp for you. Every hour, spray apple juice all over your ribs using that spray bottle. After about 3 hours, you should have a very good crust (called the Bark) on your ribs. Once you have the Bark where you want it, carefully remove your ribs and place them in a tray. We are now ready for a very important part to make the flavor. Get a large piece of foil and place one rib section on it. Splash some of the peach nectar on it, and then a drizzle of the Agave syrup. Then, use your gloved hand to pack on some brown sugar. Do this on BOTH sides, and then completely wrap it up TIGHT in the foil. Do this for each rib section, and then place all the wrapped sections back into the smoker for another 4 to 6 hours. This is where the meat will get tender and flavorful. The first three hours is only to make the smoke bark. You don’t need smoke anymore, since the ribs are wrapped, you only need to keep the heat around 225 for the next 4-6 hours. Obviously you don’t spray anymore. Just time and slow heat. Be patient. It’s actually really hard to overdo it. You can let them go longer, and all that will happen is they will get even MORE tender!!! If you take them out too soon, they will be tough. How do you know? Take out one package (use long tongs) and open it up. 
If you grab a bone with your tongs and it just falls apart and breaks away from the rest of the meat, you are done!!! Enjoy!!! Step 6. Eat. It pulls apart like this when it’s done. By the way, smoking tri-tip is way easier. Just rub it with the same rub, and put in your smoker for about 2.5 hours at 250 F. That’s it. Low-maintenance. It comes out like this, with a fantastic smoke ring and amazing flavor. Thanks, and I will put up another good tip, about the ZFSSA, around the end of November. Steve 

    Read the article

  • How do I work around sudo 'segmentation fault' on basic bash commands?

    - by sage
    I am sure the answers are out there, but alas there are too many answers (here and elsewhere) to other questions stopping me from finding them. I just encountered something substantially similar to what is described at the closed SO question, sudo : “segmentation fault” Ubuntu maverick [closed]. My team is using Ubuntu 11.04 on VMWare Workstation 8.0.4. We are doing development using C++, Xenomai, Qt, and Qt Creator. When we simulate our application on the virtual machine, we currently need to launch Qt Creator with sudo. My colleague mentioned today that he has been having issues where his workstation locks up and he needs to restart, and that occasionally all sudo bash commands return "segmentation fault". I just ran our application in simulation mode. I was running Qt Creator under sudo and Qt Creator received the abort signal (if I recall). Afterward, every command executed with sudo, from sudo qtcreator to sudo ls, resulted in the message Segmentation fault. I clicked on the power widget to see if I could log out, but the system shut down straightaway without prompting. My understanding is that we run sudo because of a permissions issue with Xenomai and the VM as currently configured, but my colleague has a workaround for this. I expect that not running Qt Creator under sudo -- something that has always made me nervous -- will help contain this issue, but I find it troubling that this could happen and manifest as it does. Does anyone know what is happening? Any recommendations on how to work around this issue? This is happening often enough that I am trying to lobby for VM changes to be able to run the process without sudo.

    Read the article

  • Where could Distributed Version Control Systems currently be in Gartner's hype cycle?

    - by dukeofgaming
    Edit: Given the recent downvoting (+8/-6 at this point) it was made clear to me that Gartner's lifecycle is a biased metric from a programmer's perspective. This is something that is part of a paper I'm going to present to management, and management types are part of Gartner's audience. Given DVCS exposure and enthusiasm (which "could" be deemed hype, or at least attacked as such), think about the following question when reading this one: "how could I use Gartner's hype cycle to convince management that DVCSs are ready (or ready-enough) for us, and that it is not just hype". Just asking whether DVCSs are hype wouldn't be constructive; Gartner's hype cycle is a more objective instrument than just asking that (even if this instrument is regarded as biased). If you know any other instrument please, by all means, mention it. Edit #2: I agree that Gartner's life cycle is not for every technology, but I consider it may have generated enough buzz to be considered hype by some, so it maybe deserves to be at least evaluated/pondered as such by using this instrument in order to prove/disprove it to whatever degree. I'm an advocate of DVCS, BTW. I'm doing research for a whitepaper I'm writing in favor of DVCS adoption at my company, and I stumbled upon the concept of social proof. I want to prove that the social proof of DVCS adoption is not necessarily cargo cult, and doing further research I now stumbled upon Gartner's hype cycle, which describes technology maturity in 5 phases. My question is: what could be an indicator of the current location of Distributed Version Control Systems (I mean git, mercurial, bazaar, etc. in general) at a particular phase in the hype cycle?... in other (less convoluted) words, would you say that current expectations of DVCSs are a) starting, b) inflated, c) decreasing (disillusionment), d) increasing (enlightenment) or e) stabilizing (mature), and (more importantly) why? I know it is a hard question and there is subjectivity involved, but I'll grant the answer (and the traditional cookie) to the clearest argument/evidence for a particular phase.

    Read the article

  • IIS cache control header settings

    - by a_m0d
    I'm currently working on a website that is accessed over https. We have recently come across a problem where we are unable to view .pdf files or any other type of file that is sent as an attachment (Content-Disposition:attachment). According to Microsoft Knowledge Base this is due to the fact that Cache-Control is set to no-cache. However, we have a requirement that all pages be fully reloaded every time they are visited, so we have disabled caching on all pages (through our ASP code, not through IIS settings). However, I have made a special case of this one page that shows the attachment, and it now returns a header with Cache-Control:private and the expiry set to 1 minute in the future. This works fine when I test it on my local machine, using https. However, when I deploy it to our test server and try it, the response headers still return Cache-Control:no-cache. There is no firewall or anything between me and the server, so IIS itself must be adding these headers and replacing mine. I have no idea why it would do this, and it doesn't really make any sense, but it seems to be the only option at the moment (I haven't yet found any other place in the code that will change the cache headers). Can anyone point me to a possible place where IIS might be setting these header values?

    Read the article

  • Server 2003 - Default installation of SharePoint throws javascript error 'undefined' is null or not an object

    - by Chris
    I have been handed a server that has a fresh installation of Small Business Server 2003. SharePoint has been installed, and it is a completely fresh copy (no changes made). I know for a fact that everything was paid for, so this isn't some dodgy copy. When I try to click any option in any of the floating menus in IE 8, the browser asks me if I want to debug... Error: 'undefined' is null or not an object. For example, I go to Projects, click on the sample project, then click delete on the floating menu that pops up, and the error appears. This is brand new; I cannot believe I am having this problem! Has anyone come across this? Is it an old problem with an easy fix? All Windows updates are installed; I see no updates for SharePoint. EDIT: This is WSS2, so I am going to upgrade to WSS3... http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=14117 http://www.techrepublic.com/article/installing-windows-sharepoint-services-30-on-windows-server-2003/6176833 Hopefully this will resolve the problem

    Read the article
