Search Results

Search found 46413 results on 1857 pages for 'system speech recognition'.

Page 756/1857 | < Previous Page | 752 753 754 755 756 757 758 759 760 761 762 763  | Next Page >

  • How do I provide dpkg configuration parameters to aptitude or apt-get?

    - by troutwine
    When installing gitolite I find that: # aptitude install gitolite The following NEW packages will be installed: gitolite 0 packages upgraded, 1 newly installed, 0 to remove and 29 not upgraded. Need to get 114 kB of archives. After unpacking 348 kB will be used. Get:1 http://security.debian.org/ squeeze/updates/main gitolite all 1.5.4-2+squeeze1 [114 kB] Fetched 114 kB in 0s (202 kB/s) Preconfiguring packages ... Selecting previously deselected package gitolite. (Reading database ... 30593 files and directories currently installed.) Unpacking gitolite (from .../gitolite_1.5.4-2+squeeze1_all.deb) ... Setting up gitolite (1.5.4-2+squeeze1) ... No adminkey given - not initializing gitolite in /var/lib/gitolite. The last line is of interest to me. If I run dpkg-reconfigure -plow gitolite I am presented with a dialog and can modify: the system user name for gitolite, the location of the gitolite repositories and provide the admin pubkey. I'd prefer to use the git system user and provide the admin pubkey on installation, say something of the sort: # aptitude install gitolite --user git --admin-pubkey 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDAc7kCAi2WkvqpAL1fK1sIw6xjpatJ+Ms2nrwLJPhdovEY3MPZF7mtH+rv1CHFDn66fLGiWevOFp...' That, of course, doesn't work. Can something similar be done? How do I determine the configuration parameters ahead of time? This would be remarkably useful, for instance, when installing gitolite automatically, via puppet or chef.
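    A hedged sketch of the usual route: installer questions like these are debconf questions, so they can be preseeded before the package is installed, which is also what makes unattended puppet/chef installs possible. The template names used below (gitolite/gituser, gitolite/adminkey) are assumptions; confirm the real ones with debconf-get-selections or debconf-show on a box that already has the package installed.

      # List the debconf questions the package actually asks (debconf-get-selections is in debconf-utils)
      debconf-get-selections | grep ^gitolite
      # Preseed the answers; the template names here are assumed, verify them first
      echo "gitolite gitolite/gituser string git" | debconf-set-selections
      echo "gitolite gitolite/adminkey string ssh-rsa AAAAB3Nza... admin@example" | debconf-set-selections
      # A non-interactive install then picks the preseeded values up
      DEBIAN_FRONTEND=noninteractive aptitude -y install gitolite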

    Read the article

  • URL Rewriting on GoDaddy Virtual Server

    - by Aristotle
    I migrated a Kohana2 application from a shared-hosting environment over to a virtual dedicated server. After this migration, I can't seem to get my .htaccess file working again. I apologize up front, but over the years I have never experienced so much frustration with anything else as I do with the dreaded .htaccess file. Presently I have my project installed immediately within a directory in my public folder: /var/html/www/info.php (general information about server) /var/html/www/logo.jpg (some flat file) /var/html/www/somesite.com/[kohana site exists here] So my .htaccess file is within that directory, and has the following contents: # Turn on URL rewriting RewriteEngine On # Installation directory RewriteBase /somesite.com/ # Protect application and system files from being viewed # This is only necessary when these files are inside the webserver document root RewriteRule ^(application|modules|system) - [R=404,L] # Allow any files or directories that exist to be displayed directly RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d # Rewrite all other URLs to index.php/URL RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L] # Alternatively, if the rewrite rule above does not work try this instead: #RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L] This doesn't work. The initial controller is loaded, since index.php is called up implicitly when nothing else is in the URL. But if I try to load up some other non-default controller, the site fails. If I place the index.php back within the URL, the call to other controllers works just fine. I'm really at my wit's end, and would appreciate some direction here.
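    A hedged first check, assuming a stock Apache install on the new VPS: .htaccess files are silently ignored unless the vhost allows overrides and mod_rewrite is loaded, which is the usual difference between shared hosting (already configured) and a fresh dedicated box. The paths and commands below are Debian/Ubuntu style and are assumptions about this server.

      # Make sure mod_rewrite is actually enabled
      a2enmod rewrite
      # In the vhost covering /var/html/www, allow .htaccess to take effect:
      #   <Directory /var/html/www>
      #       AllowOverride All
      #   </Directory>
      apachectl configtest && service apache2 restart
      # If it still fails, Apache 2.2's rewrite log shows each rule decision:
      #   RewriteLog /var/log/apache2/rewrite.log
      #   RewriteLogLevel 3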

    Read the article

  • How to run some commands after booting from ArchLinux disk? Or how to change some settings in .iso before booting?

    - by Alexander Ovchinnikov
    How do I install Arch Linux with the traditional installer when I only have ssh access to the server? There is a nice guide: https://wiki.archlinux.org/index.php/Install_from_SSH I tried to test this on my home VPS: start the VPS with any Linux bootable CD and log in to the remote server (VPS), then wget http://mirrors.kernel.org/archlinux/iso/latest/archlinux-2010.05-netinstall-x86_64.iso dd if=archlinux-2010.05-netinstall-x86_64.iso of=/dev/sda reboot ... I can see it works, but without an ssh connection... I need to make a script which will run these commands after the reboot: aif -p partial-configure-network (and enter some information about my server IP etc.) /etc/rc.d/sshd start (need to start sshd) echo "sshd: ALL" >> /etc/hosts.allow (to allow me to log in to the server; by default all are denied) passwd (by default it's empty, and I can't log in via ssh with an empty password) Can I edit the .iso, or maybe /dev/sda? Maybe I need to write a script which will start after the system boots and do these things, or maybe I can set these settings by default so the system starts with the correct settings (I think it's possible at least for 2. and 3.). Thank you!
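    A minimal sketch of the post-boot part, assuming the 2010-era Arch live image still uses classic initscripts with an /etc/rc.local run at the end of boot; appending the commands there (before writing the image to /dev/sda) would run them automatically. The network step still has to be answered interactively or scripted separately, and the password below is a placeholder.

      # Appended to the live system's /etc/rc.local
      /etc/rc.d/sshd start
      echo "sshd: ALL" >> /etc/hosts.allow
      echo "root:SomeTemporaryPassword" | chpasswd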

    Read the article

  • Open Source but not Free Software (or vice versa)

    - by TRiG
    The definition of "Free Software" from the Free Software Foundation: “Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.” Free software is a matter of the users' freedom to run, copy, distribute, study, change and improve the software. More precisely, it means that the program's users have the four essential freedoms: The freedom to run the program, for any purpose (freedom 0). The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this. The freedom to redistribute copies so you can help your neighbor (freedom 2). The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this. A program is free software if users have all of these freedoms. Thus, you should be free to redistribute copies, either with or without modifications, either gratis or charging a fee for distribution, to anyone anywhere. Being free to do these things means (among other things) that you do not have to ask or pay for permission to do so. The definition of "Open Source Software" from the Open Source Initiative: Open source doesn't just mean access to the source code. The distribution terms of open-source software must comply with the following criteria: Free Redistribution The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale. Source Code The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost preferably, downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed. Derived Works The license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software. Integrity of The Author's Source Code The license may restrict source-code from being distributed in modified form only if the license allows the distribution of "patch files" with the source code for the purpose of modifying the program at build time. The license must explicitly permit distribution of software built from modified source code. The license may require derived works to carry a different name or version number from the original software. No Discrimination Against Persons or Groups The license must not discriminate against any person or group of persons. No Discrimination Against Fields of Endeavor The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research. Distribution of License The rights attached to the program must apply to all to whom the program is redistributed without the need for execution of an additional license by those parties. 
License Must Not Be Specific to a Product The rights attached to the program must not depend on the program's being part of a particular software distribution. If the program is extracted from that distribution and used or distributed within the terms of the program's license, all parties to whom the program is redistributed should have the same rights as those that are granted in conjunction with the original software distribution. License Must Not Restrict Other Software The license must not place restrictions on other software that is distributed along with the licensed software. For example, the license must not insist that all other programs distributed on the same medium must be open-source software. License Must Be Technology-Neutral No provision of the license may be predicated on any individual technology or style of interface. These definitions, although they derive from very different ideologies, are broadly compatible, and most Free Software is also Open Source Software and vice versa. I believe, however, that it is possible for this not to be the case: It is possible for software to be Open Source without being Free, or to be Free without being Open Source. Questions Is my belief correct? Is it possible for software to fall into one camp and not the other? Does any such software actually exist? Please give examples. Clarification I've already accepted an answer now, but I seem to have confused a lot of people, so perhaps a clarification is in order. I was not asking about the difference between copyleft (or "viral", though I don't like that term) and non-copyleft ("permissive") licenses. Nor was I asking about your personal idiosyncratic definitions of "Free" and "Open". I was asking about "Free Software as defined by the FSF" and "Open Source Software as defined by the OSI". Are the two always the same? Is it possible to be one without being the other? And the answer, it seems, is that it's impossible to be Free without being Open, but possible to be Open without being Free. Thank you everyone who actually answered the question.

    Read the article

  • Getting 401 when using client certificate with IIS 7.5

    - by Jacob
    I'm trying to configure a web site hosted under IIS 7.5 so that requests to a specific location require client certificate authentication. With my current setup, I still get a "401 - Unauthorized: Access is denied due to invalid credentials" when accessing the location with my client cert. Here's the web.config fragment that sets things up: <location path="MyWebService.asmx"> <system.webServer> <security> <access sslFlags="Ssl, SslNegotiateCert"/> <authentication> <windowsAuthentication enabled="false"/> <anonymousAuthentication enabled="false"/> <digestAuthentication enabled="false"/> <basicAuthentication enabled="false"/> <iisClientCertificateMappingAuthentication enabled="true" oneToOneCertificateMappingsEnabled="true"> <oneToOneMappings> <add enabled="true" certificate="MIICFDCCAYGgAwIBAgIQ+I0z6z8OWqpBIJt2lJHi6jAJBgUrDgMCHQUAMCQxIjAgBgNVBAMTGURldiBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwHhcNMTAxMjI5MjI1ODE0WhcNMzkxMjMxMjM1OTU5WjAaMRgwFgYDVQQDEw9kZXYgY2xpZW50IGNlcnQwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBANJi10hI+Zt0OuNr6eduiUe6WwPtyMxh+hZtr/7eY3YezeJHC95Z+NqJCAW0n+ODHOsbkd3DuyK1YV+nKzyeGAJBDSFNdaMSnMtR6hQG47xKgtUphPFBKe64XXTG+ueQHkzOHmGuyHHD1fSli62i2V+NMG1SQqW9ed8NBN+lmqWZAgMBAAGjWTBXMFUGA1UdAQROMEyAENGUhUP+dENeJJ1nw3gR0NahJjAkMSIwIAYDVQQDExlEZXYgQ2VydGlmaWNhdGUgQXV0aG9yaXR5ghB6CLh2g6i5ikrpVODj8CpBMAkGBSsOAwIdBQADgYEAwwHjpVNWddgEY17i1kyG4gKxSTq0F3CMf1AdWVRUbNvJc+O68vcRaWEBZDo99MESIUjmNhjXxk4LDuvV1buPpwQmPbhb6mkm0BNIISapVP/cK0Htu4bbjYAraT6JP5Km5qZCc0iHZQJZuch7Uy6G9kXQXaweJMiHL06+GHx355Y="/> </oneToOneMappings> </iisClientCertificateMappingAuthentication> </authentication> </security> </system.webServer> </location> The client certificate I'm using in my web browser matches what I've placed in the web.config. What am I doing wrong here?

    Read the article

  • Dell Poweredge 2600 RAID Transfer How-to

    - by DCookie
    Help, please! Hardware: Dell Poweredge 2600 PERC 4 SCSI Drives, 1 standalone 3 in a RAID 5 configuration OS: Windows 2000 Server In other words, a fairly old system. Anyway, we are in the process of taking over support for this site. The current tech wants out and is fading from view fast, so we need to solve this problem: The standalone disk (where the OS was) failed. We've replaced the disk, installed the OS, but need to know exactly how to proceed from here. I've never worked with a RAID system before, so I don't want to touch anything without knowing what I'm doing. We are not certain if the site will want us to attempt to recover the array or wait for the old tech to become available. We have replaced the server with a temporary box, and recovered MOST of the data from an online backup service. However, the other tech failed to backup a part of the data and the only copy of it is on this RAID array. Hence, our caution. We have poked minimally around in the boot-up PERC config utility, and it seems to me that that's where we'll need to be to reclaim the array. Another possibility is that there is some Dell software for the RAID controller we need to acquire. Can anyone provide clues as to how to proceed from here? Any help GREATLY appreciated.

    Read the article

  • How can I fix my corrupted RAID1 ext4 partition on a Synology DS212 NAS?

    - by Neil
    I have two identical 3 TB disks that were in a RAID1 array, where one disk crashed. I replaced the failed disk, but not after the RAID partitions got messed up. I need to figure out how to restore the RAID array and get at my ext4 partition. Here are the properties of the surviving disk: # fdisk -l /dev/sda fdisk: device has more than 2^32 sectors, can't use all of them Disk /dev/sda: 2199.0 GB, 2199023255040 bytes 255 heads, 63 sectors/track, 267349 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 1 267350 2147483647+ ee EFI GPT # parted /dev/sda print Model: ATA ST3000DM001-9YN1 (scsi) Disk /dev/sda: 3001GB Sector size (logical/physical): 512B/512B Partition Table: gpt Disk Flags: Number Start End Size File system Name Flags 1 131kB 2550MB 2550MB ext4 raid 2 2550MB 4698MB 2147MB linux-swap(v1) raid 5 4840MB 3001GB 2996GB raid I replaced the failed drive, and cloned the surviving drive to it so I have something to work with. I cloned the drives with dd if=/dev/sdb of=/dev/sda conv=noerror bs=64M, and now /dev/sda and /dev/sdb are identical. Here is the RAID information: # cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] md1 : active raid1 sdb2[1] 2097088 blocks [2/1] [_U] md0 : active raid1 sdb1[1] 2490176 blocks [2/1] [_U] unused devices: <none> It seems that md2 is missing. Here is what testdisk 6.14-WIP finds: Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63 Current partition structure: Partition Start End Size in sectors 1 P Linux Raid 256 4980735 4980480 [md0] 2 P Linux Raid 4980736 9175039 4194304 [md1] Invalid RAID superblock 5 P Linux Raid 9453280 5860519007 5851065728 5 P Linux Raid 9453280 5860519007 5851065728 # After a quick search Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63 Partition Start End Size in sectors D MS Data 256 4980607 4980352 [1.41.12-2197] D Linux Raid 256 4980735 4980480 [md0] D Linux Swap 4980736 9174895 4194160 D Linux Raid 4980736 9175039 4194304 [md1] >P MS Data 9481056 5858437983 5848956928 [1.41.12-2228] And listing the files on the last partition in the list shows all of my files intact. What should I do?
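    A hedged sketch of the usual recovery path, assuming the data partition is /dev/sda5 (with its clone on /dev/sdb5) as the testdisk output suggests. Recreating an array writes new superblocks, so it is safer to try it on one disk only and keep the clone untouched as a fallback.

      # See whether a RAID superblock survives on the data partition
      mdadm --examine /dev/sda5
      # If it does, try assembling the degraded mirror and mounting read-only
      mdadm --assemble --run /dev/md2 /dev/sda5
      mount -o ro /dev/md2 /mnt
      # If the superblock is gone, recreate the mirror around the existing data;
      # --assume-clean avoids a resync and "missing" leaves the second slot empty.
      # Match the original metadata version if it can be determined (an assumption here).
      mdadm --create /dev/md2 --level=1 --raid-devices=2 --assume-clean /dev/sda5 missing
      fsck.ext4 -n /dev/md2     # read-only check before mounting
      mount -o ro /dev/md2 /mnt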

    Read the article

  • Error when trying to deploy Windows XP SP3 with WDS

    - by Nic Young
    I have created a WDS server running Windows Server 2008 R2. I have built my custom images of Windows 7 using WAIK and MDT 2010 that are installed on the server. I used this guide to help me through the process. The Windows 7 images that I have created capture and deploy properly. I am attempting to follow the same steps from the guide I linked to capture and deploy a Windows XP SP3 image. I am able to sysprep and capture the reference machine with no errors. I am then able to import the custom .wim that I just captured in to MDT 2010 with no issues either. However when I try to deploy this image to a test virtual machine I get the following error: Deployment Error: I have made sure that the .iso that I am importing the source files from originally to create the sysprep and capture sequence is indeed a Windows XP SP3 iso. When I first select a PE boot environment before I deploy I select the x86 PE boot image that I created originally when making this for my Windows 7 deployments. Could this be the issue? If so how do I make a boot image specific for Windows XP SP3 deployments? I have Googled around for this error and some places point to the deployment image not being able to find setup.exe and other important system files for installing the operating system. If so, how do I add these to the image? Any ideas?

    Read the article

  • Boot ISO image from GRUB4DOS on EFI machines

    - by Vladimir Tikhomirov
    I failed to load an ISO image (non-distro) from GRUB2 from a USB stick, but found a way to boot GRUB4DOS and then load the image from there. However, it doesn't work all the time, and the question is WHY it doesn't. Environment and loading process: We need an EFI machine, a USB stick, the booting ISO, GRUB2 and GRUB4DOS. The last 3 are on the USB stick. Boot: USB - EFI loader - GRUB2 - GRUB4DOS - ISO image Configuration files To boot GRUB4DOS I use this from grub.cfg: menuentry "image.iso" { linux /syslinux/grub.exe --config-file="/menu.lst" } My menu.lst is here: timeout 20 default 0 title image.iso find --set-root --ignore-floppies --ignore-cd //image.iso map --heads=0 --sectors-per-track=0 //image.iso (hd32) map --hook chainloader (hd32) This works perfectly on Legacy machines. However, when I get to GRUB4DOS, I don't see the menu with image.iso, I see only the GRUB command line. That means that my menu.lst didn't load. Why is that? Background and ideas My idea is that GRUB4DOS doesn't recognize my USB stick as a device. I tried the find command and got (hd0,0), (hd0,1), (hd0,2), (rd). When I try to set root to any of these devices I don't see a FAT file system, as I did with Legacy machines. The root device is (hd0,0), which has an NTFS file system and should be the Windows partition. EFI machines support only GRUB2, so I can't boot GRUB4DOS straight away. Please don't suggest anything like the following, because my image doesn't have a kernel. You can imagine that you are loading HDAT2 or Hiren's boot cd, for example. menuentry "Blancco Blancco5.iso" { set isofile="/image.iso" loopback loop $isofile set root=(loop) linux /isolinux/vmlinuz isofile=$isofile splash quiet initrd /isolinux/initrd }

    Read the article

  • Smarter, Faster, Cheaper: The Insurance Industry’s Dream

    - by Jenna Danko
    On June 3rd, I saw the Gaylord Resort Centre in Washington D.C. become the hub of C level executives and managers of insurance carriers for the IASA 2013 Conference. Insurance Accounting/Regulation and Technology sessions took the focus, but there were plenty of tertiary sessions for career development, which complemented the overall strong networking side of the conference. As an exhibitor, Oracle, along with several hundred other product providers, welcomed the opportunity to display and demonstrate our solutions, and we were encouraged by the hustle and bustle of the exhibition floor. The IASA organizers had pre-arranged fast track tours whereby interested conference delegates could sign up for a series of like-themed presentations from Vendors, giving them a level of 'Speed Dating' introductions to possible solutions and services. Oracle participated in a number of these, which were very well subscribed. Clearly, the conference had a strong business focus; however, attendees saw technology as a key enabler to get their processes done smarter, faster and cheaper. As we navigated through the exhibition, it became clear from the inquiries that came to us that insurance carriers are gravitating to a number of focus areas: Navigating the maze of upcoming regulatory reporting changes. For US carriers with European holdings, Solvency II carries a myriad of rules and reporting requirements. Alignment across the globe of the Own Risk and Solvency Assessment (ORSA) processes brings to the fore the National Association of Insurance Commissioners' (NAIC) recent guidance manual publication. Doing more with less, and certainly expecting more from technology for fewer dollars. The overall cost of IT, in particular hardware, has dropped in real terms (though the appetite for more has risen: more CPU, more RAM, more storage), but software has seen less change. Clearly, customers expect either to pay less or get a lot more from their software solutions for the same buck. Doing things smarter – A recognition that, with the advance of technology, standing still now effectively means going backwards. Technology, and in particular technology's interaction with human business processes, has undergone incredible change over the past 5 years. Consumer usage (iPhones, etc.) has been at the forefront, but now at the Enterprise level ever more effective technology exploitation is beginning to take place. Data, and in particular the knowledge gleaned from data, is refining and improving business processes. Organizations are now consuming more data than ever before, and it is set to grow exponentially for some time to come. Amassing large volumes of data is one thing, but effectively analyzing that data is another. It is the results of such analysis that lead to improvements both in terms of insurance product offerings and the processes to support them. Regulatory Compliance, damned if you do and damned if you don’t! Clearly, around the globe a lot is changing from a regulatory perspective and it is evident that in terms of regulatory requirements, whilst there is a greater convergence across jurisdictions bringing uniformity, there is also a lot of work to be done in the next 5 years. Just like big data, effective regulatory compliance often hides golden nuggets that can give competitive advantages.
From Oracle's perspective, our Rating Engine, Billing, Document Management and Insurance Analytics solutions on display served to strike up good conversations and, as is always the case at conferences, it was a great opportunity to meet and speak with existing Oracle customers that we might not have otherwise caught up with for a while. Fortunately, I was able to catch up on a few sessions at the close of the Exhibition.  The speaker quality was high and the audience asked challenging, but pertinent, questions.  During Dr. Jackie Freiberg’s keynote “Bye Bye Business as Usual,” the author discussed 8 strategies to help leaders create a culture where teams consistently deliver innovative ideas by disrupting the status quo.  The very first strategy: Get wired for innovation.  Freiberg admitted that folks in the insurance and financial services industry understand and know innovation is important, but oftentimes they are slow adopters.  Today, technology and innovation go hand in hand. In speaking to delegates during and after the conference, a high degree of satisfaction could be measured from their positive comments of speaker sessions and the exhibitors. I suspect many will be back in 2014 with Indianapolis as the conference location. Did you attend the IASA Conference in Washington D.C.?  If so, I would love to hear your comments. Andrew Collins is the Director, Solvency II of Oracle Financial Services. He can be reached at andrew.collins AT oracle.com.

    Read the article

  • Top Tweets SOA Partner Community – November 2011

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity soacommunity SOA Community Dutch ACEs SOA Partner Community award celebration wp.me/p10C8u-i9 OracleBPM Gauging Maturity of your BPM Strategy – part 1/2, bit.ly/vJE9UZ MagicChatzi Dutch ACE’s and ACE Directors had a small party: achatzia.blogspot.com/2011/11/celebr… leonsmiers #Capgemini #Oracle #BPM Blog index bit.ly/tUYtvD #yam lucasjellema Blog post by my colleague Emiel on the AMIS blog: Timeouts in Oracle SOA Suite 11g – tinyurl.com/73amo3r biemond Solving __OAUX_GENXSD_.TOP.XSD with BPEL: When you use an external web service in combination with a BPEL servic… t.co/Gzzatzrr OracleBlogs Jumpstart Fusion Middleware projects with Oracle User Productivity Kit ow.ly/1fJMev cpurdy on Oracle Coherence data grid, its new RESTful APIs, and Oracle Service Bus (OSB): blogs.oracle.com/slc/entry/orac… Accenture Learn how Service-Oriented Architecture can help public service agencies solve legacy system issues. bit.ly/sTteM4 #SOA eelzinga Thanks for organising it Andreas! #soacommunity eelzinga Had a nice drink with the fellow Dutch Oracle ACE members for a little celebration of the SOA Community Partner Award. #soacommunity EmielP Wrote a blogpost about timeouts in the #Oracle #SOA Suite: bit.ly/uhUcrX OracleBlogs Processing Binary Data in SOA Suite 11g t.co/Tzd1xBsY OracleBlogs Finding the Value in SOA by Stephen Bennett t.co/9MMLJoLz OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/5viNj8ib OracleBlogs Demo: Business Transaction Management with SOA Management Pack ow.ly/1fFBv3 OTNArchBeat SOA All the Time; Architects in AZ; Clearing Info Integration hurdles t.co/Dnfzo0PN oracletechnet Wikis.oracle.com lives leonsmiers A new #capgemini #oracle #blog, Measuring the Human Task activity in Oracle BPM bit.ly/uPan08 #yam @CapgeminiOracle OTNArchBeat 3 SOA business cases, explained in a 2-minute elevator speech | @JoeMcKendrick t.co/aYGNkZup OTNArchBeat Gartner, Inc. places Oracle SOA Governance in Magic Quadrant for SOA Governance Technologies t.co/bSG5cuTr Jphjulstad Red carpet to Oracle BPM – evita.no evita.no/ikbViewer/soa-… Oracle #Oracle Named a Leader in #SOA Governance Magic Quadrant by Leading Analyst Firm t.co/prnyGu2U soacommunity What presentations & topics do you like to see at the next SOA & BPM & Webcenter Community Forum early 2012? #soacommunity soacommunity Oracle BPM Suite 11g Handbook Released wp.me/p10C8u-hU OTNArchBeat SOA Development Virtual Developer Day (On Demand) | @soacommunity bit.ly/sqhQmX OracleBlogs SOA Development Virtual Developer Day (On Demand) t.co/MDrdnx0h 9 Nov Favorite Undo Retweet Reply OracleBlogs Specialized Partners Only! New Service to Promote Your Events t.co/qTgyEpY4 biemond @stevendavelaar this is for you t.co/hInKCcfY it explains your sso problem soacommunity SOA Development Virtual Developer Day (on demand) t.co/flXPWk4R soacommunity IPT Swiss SOA Experts – thanks for the nice ink wp.me/p10C8u-i3 soacommunity Enjoy #wjax specially the presentations from our #ACE @t_winterberg @myfear @AdamBien pic.twitter.com/m8VcBSG3 OTNArchBeat Discounts on books, more, for Oracle Technology Network members bit.ly/vRxMfB OracleSOA Justify the ROI of SOA in 10 seconds…a pic is worth 1000 words bit.ly/roi_of_soa_img #oraclesoa #soa #oow11 orclateamsoa A-Team SOA Blog: Case Management in BPM 11g -  Mark Foster Oracle BPM 11g & Case Management I’ve seen… t.co/l5zb6pFr t_winterberg Die nächste SIG #SOA steht an: 7.12. in Hamburg. 
Neues Tooling und Erfahrungen rund um Oracle FMW, SOA, BPM… (cont) deck.ly/~YC57v OracleBlogs Continuous Integration for SOA/BPM ow.ly/1fsekI OracleBlogs BPM Suite 11g Handbook Released ow.ly/1frlzv lucasjellema Iterating over collection (array) in BPM (and dispatching jobs for entries in array): t.co/1SEhSvWv – subprocesses are the key. lucasjellema Lucas Jellema Useful tip from Mark Nelson: BPM API documentation (as well as Human Workflow Service) available: redstack.wordpress.com/2011/09/28/api… OTNArchBeat SOA, cloud: it’s the architecture that matters | Joe McKendrick zd.net/tNCiTF orclateamsoa: Building a job dispatcher in BPM -or- Iterating over collections in BPM ow.ly/1frbrz orclateamsoa Using the Database as a Policy Store for SOA 11g ow.ly/1frbrA OracleBPM Oracle launches Process Accelerators for BPM: t.co/XPEE61QL Jphjulstad Human-Centric BPM Selection Checklist t.co/3TZXZHLH OracleBlogs Fusion Middleware General Session at OOW 2011: Missed It? Read On… t.co/aU5JvM6K gschmutz Great! The product page of the OSB 11g Development Cookbook is now online: t.co/5Jfbe6Ng Looking forward to get it, u too? brhubart Oracle IT Architecture Essentials; Lightweight Composite Service Development with SCA and Spring; Cloud Migration ow.ly/7esNg eelzinga New blogpost : Oracle Service Bus, Generic fault handling, bit.ly/sGr4UL #osb #oracleservicebus For regular information on Oracle SOA Suite become a member in the SOA Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Technorati Tags: soacommunity,twitter,Oracle,SOA Community,Jürgen Kress,OPN

    Read the article

  • IIS URL Rewrite HTTP to HTTPS with Port

    - by Andy Arismendi
    My website has two bindings: 1000 and 1443 (port 80/443 are in use by another website on the same IIS instance). Port 1000 is HTTP, port 1443 is HTTPS. What I want to do is redirect any incoming request using "http://server:1000" to "https://server:1443". I'm playing around with IIS 7 rewrite module 2.0 but I'm banging my head against the wall. Any insight is appreciated! BTW the rewrite configuration below works great with a site that has an HTTP binding on port 80 and HTTPS binding on port 443, but it doesn't work with my ports.

      <?xml version="1.0" encoding="utf-8"?>
      <configuration>
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="HTTP to HTTPS redirect" stopProcessing="true">
                <match url="(.*)" />
                <conditions trackAllCaptures="true">
                  <add input="{HTTPS}" pattern="off" />
                </conditions>
                <action type="Redirect" redirectType="Found" url="https://{HTTP_HOST}/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
      </configuration>

    Read the article

  • CentOS 5.5 ext4 conversion problem - ext4 partition is recognized as ext3

    - by FractalizeR
    Hello. I had a 5.4 machine and upgraded to 5.5 today via yum upgrade. All went fine. Rebooted. I wanted to convert the root partition to ext4 (I have three partitions: /boot, / and swap), all of them on software RAID 1 (root is /dev/md2). I did the following to convert: yum install e4fsprogs ; tune2fs -O extents,uninit_bg,dir_index /dev/md2 ; nano /etc/fstab # I indicated here that my /dev/md2 is ext4 ; uname -a ; mkinitrd -f /boot/initrd-2.6.18-194.3.1.el5.img 2.6.18-194.3.1.el5 Rebooted. I expected fsck to start automatically as said on some site, but it did not; it threw some error (I don't remember exactly which). OK, I booted linux rescue and executed fsck: fsck -t ext4 -fy /dev/md2 The partition came out fine. But still, when I boot the main system, the log says "ext3-fs:" and then something about not being able to mount the ext3 partition due to unknown extended attributes (200). I booted linux rescue again. It loads fine and correctly mounts all my machine's partitions, both ext3 (boot) and ext4 (/), under /mnt/sysimage. I retried the mkinitrd step, watching its output, and ensured the ext4 module is included. I also edited the menu.lst grub file to include the rootfstype=ext4 kernel parameter. Bad luck. I still get the message from ext3-fs about not being able to mount the filesystem because of attributes, and a kernel panic immediately after. I checked /etc/fstab; it's fine and says that root is ext4. What did I do wrong? This machine is empty so I could just reinstall 5.5 and create the partitions as ext4 from the start. But... I just want to know what I did wrong.
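    A hedged way to confirm whether the rebuilt initrd really mounts the root as ext4, which is the most common cause of that exact ext3-fs complaint after an in-place conversion; the mkinitrd options are from RHEL/CentOS 5 and should be double-checked against mkinitrd --help.

      # Unpack the initrd and look at how it mounts the root filesystem
      mkdir /tmp/ird && cd /tmp/ird
      zcat /boot/initrd-2.6.18-194.3.1.el5.img | cpio -idmv
      grep ext4 init            # the mount line in the init script should say ext4, not ext3
      find . -name 'ext4*.ko'   # the ext4 module should be present
      # If either is missing, rebuild with the module forced in
      mkinitrd -f --with=ext4 /boot/initrd-2.6.18-194.3.1.el5.img 2.6.18-194.3.1.el5
      # And confirm grub passes rootfstype=ext4 on the kernel line
      grep rootfstype /boot/grub/menu.lst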

    Read the article

  • Windows NT from vmware to kvm

    - by Luca Rossi
    I'm trying to convert a couple of old Windows NT virtual servers from vmware to KVM. I tried almost all the guidelines and how-tos I found around the web, but with no luck. I have the vmware virtual disk: Dlc1.vmdk, a partitioned image. I converted the vmdk into a qcow2 image with the qemu utility and tried to use it with kvm: kvm -hda test.qemu -vnc :1 -m 750, but I receive "error loading operating system". I also tried with raw partitions I can mount through losetup and kpartx, but nothing changed. I also tried to create a brand new image file with: qemu-img create -f qcow2 test.qcow2 2G, partitioned the new image file, and copied the original partition 1 to the new partition 1 with dd: dd if=/dev/mapper/loop1p1 of=/dev/mapper/loop0p1 bs=128M No luck again. I also tried with a single unpartitioned file: qemu-img create -f qcow2 test.qcow2 2G, and copied partition 1 to the new image file: dd if=/dev/mapper/loop0p1 of=test.img bs=128M, but when booting I receive a black screen and the virtual machine hangs. The bootloader is loaded successfully, because I also tried with a GRUB live iso and I receive the same screens and errors. Note that grub sees the Windows setup and gives me the boot choice. I suspect the problem is that the vmware machine is probably a SCSI guest, and in CentOS 6 (my system) SCSI emulation is no longer supported. But in that case, what do I change in Windows? I'm not so skilled with MS systems. Thank you for the help, Luca Rossi
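    A hedged sketch of the simpler route, assuming Dlc1.vmdk describes the whole virtual disk (partition table included) rather than a single partition: converting the vmdk directly preserves the MBR, which is usually what "error loading operating system" points to when only a partition's contents were copied. File names below are placeholders.

      # Convert the whole VMware disk straight to qcow2
      qemu-img convert -f vmdk -O qcow2 Dlc1.vmdk nt.qcow2
      qemu-img info nt.qcow2          # sanity check: virtual size should match the original disk
      # Boot with plain IDE emulation, which old NT-era Windows understands without extra drivers
      kvm -m 750 -drive file=nt.qcow2,if=ide -vnc :1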

    Read the article

  • ADSL throughput loss from Reed-Solomon encoding

    - by javano
    I'm reading about ADSL starting here and I am confused by how the Reed-Solomon encoding for ECC is limiting the available transfer rate, as much as it does (nearly half). This pdf on the same subject contains the following: A maximum of 255 sub-carriers can be used to modulate data in the downstream direction. Sub-carrier 256, the downstream Nyquist frequency, and sub-carrier 64, the downstream pilot frequency, are not available for user data, thus limiting the total number of available downstream sub-carriers to 254. Each of these 254 sub-carriers can support the modulation of 0 to 15 bits. Since the ADSL DMT data frame rate is 4000 frames per second, the maximum theoretical downstream data rate of an ADSL system is 15.24Mbps. Due to limitations in system architecture, specifically the maximum allowable Reed-Solomon codeword size (255 bytes), the maximum achievable downstream data rate is 8.16Mbps. How is this nearly halving the throughput? Is all that extra bandwidth overhead of the RS encoding? 15240000 bps (15.24Mbps) - 8160000 bps (8.16Mbps) = 7080000 bps (7.08Mbps). Where has that 7Mbps of throughput gone? EDIT: I tried to read the wiki page on Reed-Solomon but it's all crazy maths and algebra, which I don't understand. I can understand that data is split into 255 byte codewords, because that may be the max codeword size whilst still maintaining accuracy during transmission, but I don't understand why that means less data is sent?
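    A short worked version of the two figures, assuming (as the arithmetic suggests) that at most one 255-byte Reed-Solomon codeword is carried per DMT frame; on that reading the 8.16 Mbps number is a framing cap, not bandwidth consumed by parity bytes.

      $R_{\mathrm{theoretical}} = 254 \text{ sub-carriers} \times 15 \text{ bits} \times 4000 \text{ frames/s} = 15{,}240{,}000 \text{ bps} = 15.24 \text{ Mbps}$
      $R_{\mathrm{codeword}} = 255 \text{ bytes} \times 8 \text{ bits/byte} \times 4000 \text{ frames/s} = 8{,}160{,}000 \text{ bps} = 8.16 \text{ Mbps}$

    On that interpretation the missing 7.08 Mbps is capacity that simply cannot be framed into a single codeword per symbol, independent of how many of those 255 bytes the code then spends on parity.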

    Read the article

  • File upload permissions issue on Windows Server 2008 R2 IIS 7.5 PHP 5.3 with Drupal v.7.26

    - by Taras
    I have a website on Drupal version: 7.26 OS on server is Windows Server 2008 R2 Web server $_SERVER["SERVER_SOFTWARE"]: Microsoft-IIS/7.5 Server API: CGI/FastCGI Core PHP Version: 5.3.28 file_uploads: On post_max_size: 75M upload_max_filesize: 50M upload_tmp_dir: C:\inetpub\wwwroot\tmp memory_limit: 128M open_basedir: C:\inetpub\wwwroot;C:\inetpub\wwwroot\tmp When I go to /admin/config/media/file-system I see error messages: The directory sites\default\files exists but is not writable and could not be made writable. The directory tmp exists but is not writable and could not be made writable. Public file system path: sites\default\files Temporary directory: tmp I have set permissions on these folders: C:\inetpub\wwwroot\tmp : IIS_IUSRS : Full control C:\inetpub\wwwroot\sites\default\files : IIS_IUSRS : Full control I am working as the Administrator user: C:\Users\Administrator\Downloads> echo %username% Administrator I can't change the Read Only attribute for these folders. Every time I make the change and press the Apply button (with "Apply changes to this folder, subfolders and files" checked) and press the OK button, it displays the "Applying attributes..." dialog; when it finishes I press the OK button on the folder properties dialog, closing it. When I open the Properties dialog again I see Read-only is checked again. How can I fix it?
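    Two hedged things worth checking: the Read-only box on a folder in Explorer is largely cosmetic (Windows keeps showing it for folders regardless of whether attrib clears it), so Drupal's complaint is almost always about the effective ACL for the identity PHP actually runs as under FastCGI, which is not necessarily covered by IIS_IUSRS. The pool name below is an assumption.

      rem Find out which identity the site's application pool runs as
      %windir%\system32\inetsrv\appcmd.exe list apppool "DefaultAppPool" /text:processModel.identityType
      rem Grant that identity modify rights, inherited down to subfolders and files
      icacls "C:\inetpub\wwwroot\sites\default\files" /grant "IIS AppPool\DefaultAppPool:(OI)(CI)M" /T
      icacls "C:\inetpub\wwwroot\tmp" /grant "IIS AppPool\DefaultAppPool:(OI)(CI)M" /T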

    Read the article

  • Why datacenter water cooling is not widespread?

    - by MainMa
    From what I read and hear about datacenters, there are not too many server rooms which use water cooling, and none of the largest datacenters use water cooling (correct me if I'm wrong). Also, it's relatively easy to buy ordinary PC components using water cooling, while water cooled rack servers are nearly nonexistent. On the other hand, using water can possibly (IMO): Reduce the power consumption of large datacenters, especially if it is possible to create direct cooled facilities (i.e. the facility is located near a river or the sea). Reduce noise, making it less painful for humans to work in datacenters. Reduce space needed for the servers: At the server level, I imagine that in both rack and blade servers, it's easier to pass the water cooling tubes than to waste space to allow the air to pass inside, At the datacenter level, if it's still required to keep the alleys between servers for maintenance access to servers, the empty space under the floor and at the ceiling level used for the air can be removed. So why are water cooling systems not widespread, either at the datacenter level or at the rack/blade server level? Is it because: Water cooling is hard to make redundant at the server level? The direct cost of a water cooled facility is too high compared to an ordinary datacenter? It is difficult to maintain such a system (regularly cleaning a water cooling system which uses water from a river is of course much more complicated and expensive than just vacuum cleaning the fans)?

    Read the article

  • Large virtual memory size of ElasticSearch JVM

    - by wfaulk
    I am running a JVM to support ElasticSearch. I am still working on sizing and tuning, so I left the JVM's max heap size at ElasticSearch's default of 1GB. After putting data in the database, I find that the JVM's process is showing 50GB in SIZE in top output. It appears that this is actually causing performance problems on the system; other processes are having trouble allocating memory. In asking the ElasticSearch community, they suggested that it's "just" filesystem caching. In my experience, filesystem caching doesn't show up as memory used by a particular process. Of course, they may have been talking about something other than the OS's filesystem cache, maybe something that the JVM or ElasticSearch itself is doing on top of the OS. But they also said that it would be released if needed, and that didn't seem to be happening. So can anyone help me figure out how to tune the JVM, or maybe ElasticSearch itself, to not use so much RAM. System is Solaris 10 x86 with 72GB RAM. JVM is "Java(TM) SE Runtime Environment (build 1.7.0_45-b18)".
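    A hedged way to tell heap and mapped files apart on Solaris: if most of the 50 GB is mmap'd Lucene index files (ElasticSearch's mmapfs store on 64-bit JVMs), it inflates the process's virtual SIZE but is backed by the page cache and reclaimable, while RSS and heap are what actually squeeze other processes. The store setting named below is from the ElasticSearch of that era and should be verified against the installed version.

      # Compare virtual size against resident set for the ES java process
      prstat -p <es_pid> 1 1        # a large SIZE/RSS gap points at mapped files, not heap
      pmap -x <es_pid> | tail       # per-mapping breakdown; index files show up as mapped segments
      # If mapped index files really are the problem, the usual knob is the store type
      # in elasticsearch.yml (an assumption for this version):
      #   index.store.type: niofs
      # and pin the heap explicitly when starting, e.g. ES_HEAP_SIZE=1g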

    Read the article

  • Enable RemoteApp Full Desktop programmatically

    - by Scott Chamberlain
    I am writing a PowerShell script to set up some HyperV VMs; however, there is one step I am having trouble automating. How do I check the box to allow Remote desktop access from the RemoteApp settings programmatically? I can set up all of the customizations I need by doing #build the security descriptor so the desktop only shows up for people who should be allowed to see it $remoteDesktopUsersSid = New-Object System.Security.Principal.SecurityIdentifier($remoteDesktopUsersGroup.objectSid[0],0) $aceTemplet = 'O:WDG:WDD:ARP(A;CIOI;CCLCSWLORCGR;;;{0})' $securityDescriptor = $aceTemplet -f $remoteDesktopUsersSid #get a copy of the WMI instance $tsRemoteDesktop = Get-WmiObject -Namespace root\CIMV2\TerminalServices -Class Win32_TSRemoteDesktop #set settings $tsRemoteDesktop.Name = $ServerDisplayName $tsRemoteDesktop.SecurityDescriptor = $securityDescriptor $tsRemoteDesktop.IconPath = $IconPath $tsRemoteDesktop.IconIndex = $IconIndex #push settings back to server Set-WmiInstance -InputObject $tsRemoteDesktop -PutType UpdateOnly however the instance of that WMI object does not exist until after you have the above box checked. I attempted to use Set-WmiInstance to instantiate and set the settings at the same time but I keep getting errors like: Set-WmiInstance : At line:53 char:16 + Set-WmiInstance <<<< -Namespace root\CIMV2\TerminalServices -Class Win32_TSRemoteDesktop -Arguments @{Alias='TSRemoteDesktop';Name=$ServerDisplayName;ShowInPortal=$true;SecurityDescriptor=$securityDescriptor} + CategoryInfo : NotSpecified: (:) [Set-WmiInstance], ArgumentException + FullyQualifiedErrorId : System.ArgumentException,Microsoft.PowerShell.Commands.SetWmiInstance (also after running the command and getting the error it will delete the instance of Win32_TSRemoteDesktop if it already existed and un-check the box in the properties setting) Is there any way to programmatically check that box or can anyone help with why Set-WmiInstance throws that error?

    Read the article

  • Cacti not working for SNMP data sources

    - by lorenzo-s
    I installed the cacti and snmpd packages on a Debian server. I'm able to display common graphs in Cacti (such as memory usage, load average, logged in users, etc) using the data templates listed as Unix. Now I want to replace these graphs with new ones using SNMP data sources, because I see there is also CPU usage, and because it's possible I'll have to manage multiple hosts in the future. So, I installed snmpd on the machine and left the snmpd.conf as it is. In Cacti, I created three new data sources from SNMP templates for the 127.0.0.1 host: ucd/net - CPU Usage - Nice ucd/net - CPU Usage - System ucd/net - CPU Usage - User Then I created a new graph from the template ucd/net - CPU Usage, and selected the three data sources in the Graph Item Fields section. The graph is now enabled and running, but empty. No data have been collected. Under Console - Devices my SNMP host is listed as up and running: System:Linux ip-xx-xx-xxx-xxx 3.2.0-23-virtual #36-Ubuntu SMP Tue Apr 10 22:29:03 UTC 2012 x86_64 Uptime: 929267 (0 days, 2 hours, 34 minutes) Hostname: ip-xx-xx-xxx-xxx Location: Sitting on the Dock of the Bay Contact: Me [email protected] In SNMP Options I left everything as it is: SNMP Version: Version 1 SNMP Community: public SNMP Timeout: 500 ms Maximum OID's Per Get Request: 10 In Console - Utilities - Cacti Log I get multiple warnings (two for each data source) every 5 minutes: 10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[2] DS[18] WARNING: Result from SNMP not valid. Partial Result: U 10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.15.0' 10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[1] DS[9] WARNING: Result from SNMP not valid. Partial Result: U 10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.11.52.0' 10/29/2012 01:40:01 PM - CMDPHP: Poller[0] Host[2] DS[19] WARNING: Result from SNMP not valid. Partial Result: U 10/29/2012 01:40:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.6.0' [...] I have the feeling I'm missing something, but I cannot figure it out...
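    A hedged place to look first: Debian's stock snmpd.conf only exposes a restricted view (roughly the system tables), so the UCD CPU and memory OIDs under .1.3.6.1.4.1.2021 time out even though sysDescr answers, which matches Cacti seeing the host as up while every ucd/net data source stays empty. The exact default directive varies by release, so check the installed snmpd.conf.

      # Reproduce Cacti's failing query by hand; a timeout here confirms the view restriction
      snmpget -v1 -c public 127.0.0.1 .1.3.6.1.4.1.2021.11.52.0
      # In /etc/snmp/snmpd.conf, replace the restricted default, e.g.
      #   rocommunity public default -V systemonly
      # with a community that can read the UCD subtree from localhost:
      #   rocommunity public 127.0.0.1
      /etc/init.d/snmpd restart
      # Re-run the snmpget; once it returns a value, Cacti's poller fills the graphs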

    Read the article

  • MSI Installer error 2203; how to force permissions on installer directory?

    - by goober
    [Cross-Posted on StackOverflow.com as well because the question relates to development. Feel free to let me know where it best belongs.] Hi all, I'll try to bullet-point to keep it short: Background / Issue Trying to install ASP.NET MVC 3 RC on my Windows 7 machine. Uninstalled other versions of MVC (2 and 3 Beta 1). Ran the installer -- got a generic error, 2203. Log files said that it was a permissions error on C:\Windows\Installer. Checked C:\Windows\Installer -- sure enough, it's marked as read-only. I un-checked "Read-Only" in the folder properties and applied. It appears to open the dialog and apply to all files. However, when clicking properties again, the read-only box is back to being checked. Checked the security tab of the folder -- both system and the Administrators group have full access. I checked ownership -- the Administrators group is listed as an owner. Verified that I'm in the system as an Administrator (in fact, the only account in the Administrators group besides Administrator). So, what gives? Thanks in advance for any help you can provide!

    Read the article

  • DCOM configuration: accounts with same name but different passwords problem

    - by archimed7592
    Hello, everybody! I'm experiencing trouble with DCOM configuration. Here is the case: I'm using a product which supports client-server interaction through DCOM, but the client won't get any access to the server if the attempt is made from an account with a name which also exists on the server but has a different password. Basically, if we try to access the server from the Administrator account, which is obviously present on the server machine, we will fail if the client's Administrator password doesn't match the server's one. After actively collaborating with the product's developer in attempts to localize the issue, he came back with the resolution "can't be fixed" or, if you prefer to call a pikestaff a pikestaff, it's more likely a "don't know how to fix" resolution :). I believe there is a solution for this problem and I'm asking you, IT professionals, to help me out with this one. I do realize that the problem may be caused by the way the developer interacts with DCOM, and if so it can't be fixed by means of pure system configuration and the question should be asked at SO; but since I've bumped into the same behavior while working with file/printer sharing - Windows tried to simplify everything and used the currently impersonated credentials to access the share - I hope the solution lies at the system configuration layer. P.S. I believe that the actual software product I'm talking about is entirely irrelevant; however, my experience tells me that there will always be somebody who thinks that it is, on the contrary, very relevant. Here it is: SpRecord.

    Read the article

  • Blue screen of death while installing any Adobe Air application

    - by Gaurav Sharma
    Whenever I try to install an Air application I get a Blue Screen and then my system restarts. I cannot even take a screenshot of it. This happens with every air application I try to install. I also searched for the same on the Adobe forums and found the same problem being faced by someone else. His problem was resolved by uninstalling a piece of software named "Folder Lock". I searched my hard disk for this software and found it, so I deleted that software (shift+delete) and removed all its traces from the registry too, but that still didn't solve the problem. I also tried disabling the antivirus software and then installing the air application, but this also didn't help. Here is the screenshot of the BSOD. I was able to install air applications earlier, but now I can't. Is anybody having the same sort of problem? One colleague of mine is also having the same problem. Please help me out. My system's config is as follows: Windows XP Home SP3, Flash Builder 4 with SDK 4.1 and 3.5 installed in it, Adobe Air v2.5, 1.5 GB RAM, 1.66 GHz processor. Thanks
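    A hedged way around the "restarts before I can read it" problem: stop the automatic reboot and pull the stop code from the minidump, which usually names the faulting driver (filter drivers like the one Folder Lock installs can survive a plain uninstall and would show up there).

      rem Keep the blue screen on screen instead of rebooting immediately
      wmic recoveros set AutoReboot = False
      rem After the next crash, the stop code and faulting module are in the minidump
      dir C:\Windows\Minidump
      rem Open the newest .dmp in WinDbg ("!analyze -v") or a viewer such as BlueScreenView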

    Read the article

  • How does it hurt to use Linux (Ubuntu) as a guest OS for all my tasks?

    - by sauparna
    I have a machine running Windows, where the disk has two partitions C (50 GB) and D (250GB). I do research in Information Retrieval and need to work with a large corpus (more than 50 GB) and in Linux. So if I want to install Linux on the existing system, keeping the Windows installation intact, will it be fine to run it in a virtual box? (say, QEMU, VMWare, etc.) An alternative is using Wubi. In that case the Linux installation has to be on drive C. Then, if I keep a small Linux installation (say 5GB) on C, and my corpus on D (mounted in Linux), how will it affect the performance of my programs which would be accessing the mounted Windows drive D. Is it feasible to use Linux this way? Which of the above is better if at all they are a way out? Note : Since my post in July 2010, I have been using and have tried several ways of maintaining a disk-image that I can mount in Linux. I had a 100GB qcow2 disk and a 100GB raw disk, both formatted to an EXT3 file system. I was mounting and connecting to the qcow2 disk using qemu-nbd. The problem was that every now and then, the connection to the disk would get lost and the running programs would throw disk I/O errors. The raw disk would mount and work fine as a loop mounted device, but when writing data to it, the mount.ntfs program would hog the CPU and the process would take an enormous amount of time. I was in fact running make on a piece of software located on this raw disk, and after a point of time make was waiting while mount.ntfs would show 100% CPU usage.
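    A hedged note on the mount.ntfs CPU burn: ntfs-3g runs through FUSE, and streams of small writes (exactly what make and compilers generate) are its worst case. The options below do exist in ntfs-3g, though how much they help here is an assumption; keeping build trees on a native ext3/ext4 image and leaving the NTFS corpus read-only sidesteps the problem entirely.

      # Remount the raw NTFS image with larger write batching and no atime updates
      umount /mnt/data
      mount -t ntfs-3g -o loop,big_writes,noatime /path/to/raw.img /mnt/data
      # Better still: build on a native Linux filesystem image and only read the corpus from NTFS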

    Read the article

  • Sharepoint db issue after DB move to SQL 08

    - by JohnyV
    Recently we moved our SharePoint 2007 db from a SQL 2000 server to a 2008 x64 SQL server. All seems well; however, there is a problem where the SQL server stops running and the service has to be restarted. The errors mention insufficient internal memory etc. I have tried to start the db using -g384, which is the default in SQL 2000, but 256 is the default for 2008 I believe. This has not rectified the issue. I was advised that perhaps the issue could be rectified by upgrading to WSS 3.0 SP2; however, when I tried to install this I got another error after the SP2 update and had to revert to a VM snapshot. The error after the service pack is: Server error: http://go.microsoft.com/fwlink?LinkID=96177 So I guess I have a few questions: how can I fix the first issue and the second issue? I have checked out many forums and posts and have tried a few things and still get no joy. Any assistance would be great. UPDATE I have fixed the Server error: http://go.microsoft.com/fwlink?LinkID=96177 issue: I needed to run the WSS SP2 as well as the Office Servers SP2, then the config wizard, and then the MOSS configuration worked. The errors I am getting in SQL are: SQL Server was unable to run a new system task, either because there is insufficient memory or the number of configured sessions exceeds the maximum allowed in the server. Verify that the server has adequate memory. Use sp_configure with option 'user connections' to check the maximum number of user connections allowed. Use sys.dm_exec_sessions to check the current number of sessions, including user processes. A read operation on a large object failed while sending data to the client. A common cause for this is if the application is running in READ UNCOMMITED isolation level. The connection will be terminated. There is insufficient system memory in resource pool 'internal' to run this query. These errors are from a user that was created as a service account for SharePoint.

    Read the article
