Search Results

Search found 34016 results on 1361 pages for 'filesystem access'.


  • Link instead of Attaching

    - by Daniel Moth
    With email storage not being an issue in many companies (I think I currently have 25GB of storage on my email account; I don't even think about storage), this abundance encourages bad behaviors, such as liberally attaching Office documents to emails instead of sharing a link to the document in SharePoint, SkyDrive, or some file share. Attaching a file admittedly has its usage scenarios too, but it should not be the default. I thought I'd list the reasons why sharing a link can be better than attaching files directly. In no particular order:
    - Better review. A link allows multiple recipients to review the file, and their comments are aggregated into a single document. The alternative is everyone having to detach the document, add their comments, and send it back to you, leaving you to collate; you also potentially miss out on recipients reading comments from other recipients.
    - Always up to date. An attachment becomes a fork instead of an always-up-to-date document. For example, you send the email on Thursday and I only open it on Tuesday: between those days you could have made updates that I am now missing because you decided to share an attachment instead of a link.
    - Better bookmarking. When I need to find that document you shared, an attachment forces me to search through my email (I may not even be running Outlook), instead of opening a link I have bookmarked in my browser, in my collection of links in OneNote, or in the recent/pinned links of the Office app on my task bar.
    - Control over access. If someone accidentally or naively forwards your link to someone outside your group/org who you'd prefer not to have access to it, the location of the document can be protected with specific access control.
    - Easier to add recipients. If someone adds people to the email thread in Outlook, your attachment doesn't get re-attached; the person added is left without the attachment unless someone remembers to re-attach it. With a link, they are immediately caught up without further action.
    - Enables discovery. If you put the document on a share, I may be able to discover other useful material that lives alongside it.
    - Saves storage. This doesn't apply to me given my opening statement, but if your company does have such limitations, attached files eat up storage in every recipient's account, get "lost" when those people archive email, and disappear completely at some point if they follow the company retention policy.
    Like I said, attachments do have their place, but they should be an explicit choice for explicit reasons rather than the default. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Internet stopped working suddenly on 12.04

    - by Daniel
    My laptop was running smoothly until yesterday. Today, I can't connect to the Internet at home anymore: I am only able to reach the router, but have no Internet access. I have a Dell Latitude E6320 with Ubuntu 12.04. At my job, I don't have any problems connecting this laptop via both wireless and Ethernet. At home, if I connect through Windows, it works fine. I even checked the MAC address and it's OK. My other laptop, which also runs Ubuntu, is not facing this problem. I have already tried restarting and downgrading the network-manager package and its dependencies. Can anyone help me, please? I am afraid I will have to reinstall everything.

    Read the article

  • Enterprise with eyes on NoSQL

    - by thegreeneman
    Since joining Oracle a few months back, I have had the fortune of interacting with a number of large enterprise organizations and discussing their current state of adoption of NoSQL database technology. It is worth noting that a large percentage of these organizations do have some NoSQL use and have been steadily increasing their understanding of its applicability to certain data management workloads. Through those discussions I've learned that one of the biggest issues confronting enterprise adoption of NoSQL databases is the lack of standards for access, administration and monitoring. This was not so much of an issue for the early adopters of NoSQL technology, because they employed a highly DevOps-centric approach to application deployment, leaving a select few highly qualified developers with the task of managing in production the system that they designed and implemented. However, as NoSQL technology moves out of the startup world and into the hands of larger corporate entities, developers with a skill set broad enough to cover both development and IT-style production management are in short supply and quickly get moved on to new projects, often moving to different roles within the company. This difference between the way smaller, more agile startups operate and the way more established companies do is revealing a gap in the NoSQL technology segment that needs to be addressed. This is one of the places where a company such as Oracle has a leg up on the NoSQL database front. Having gone through a past database maturation process, combined with a vast set of corporate relationships that have grown hand in hand with solving these types of issues, Oracle is in a great place to lead the way in closing the requirements gap for NoSQL technology. Oracle's understanding of the needs specific to mature organizations has already made its way into Oracle's NoSQL Database offering, with features such as: one-click cluster deployment with visual topology planning, standards-based monitoring protocols such as SNMP, support for data access for reporting via standard SQL, and integration with emerging standards for data access such as MapReduce. Given the exciting developments we're driving in the Oracle NoSQL Database group, I will have a lot more to say about this topic as we move into the second half of the year.

    Read the article

  • Options for secure git repo hosting?

    - by hhh
    I need a secure git repo host, either third-party or run by myself. I am not sure which, so I am outlining some ideas. Please answer with how you manage git repos securely -- do you use a service, or do you host on your own legs? As far as I know, Bitbucket.org and Github.com are missing Gmail-style two-step verification. I need that kind of login system (password plus mobile phone) to reach the administration side of the git hosting, or the ability to disable that kind of access entirely so that nothing works without the private key. The options I see:
    - host it myself (not sure about the details)
    - other?
    Perhaps related: http://stackoverflow.com/questions/11007679/how-can-i-host-git-repositories-and-manage-my-content-hosting-myself

    Read the article

  • How to connect two Ubuntu laptops using PuTTY?

    - by VanillaTwilight
    I am trying to connect two Ubuntu laptops (server and client) using PuTTY. Here is the description:
    Laptop A: Ubuntu Server 12.04, kernel 3.2.0
    Laptop B: Ubuntu 12.04, kernel 3.2.0
    Initially, I made the following attempts:
    - connected the Ubuntu server to the Internet (wireless network), following this guide: www.ubuntuforums.org/showthread.php?t=1740726
    - installed openssh-server on laptop A and PuTTY on laptop B
    - went through the different options in /etc/ssh/sshd_config
    It shows 'Access Denied' while trying to log in from laptop B (after I have entered the correct password). I followed http://naveenubuntu.blogspot.in/2012/08/receiving-access-denied-just-after.html but it doesn't work. Please help.

    Read the article

  • Run Java application as a different user [on hold]

    - by Harihar Das
    I need to run a few Perl scripts from a Java GUI application, and I am using the Runtime API to do that. However, a few of the Perl scripts need to run under a specific user account that has the credentials to access specialized resources (e.g. databases, files). I have heard of elevating user access using UAC, but so far I have not been able to find a solution. Please help me with how to run a process under a different user login. Is there anything similar to C# impersonation in Java?
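    For what it's worth, a minimal sketch of one way this is sometimes done on Windows: launching the script through the runas utility via ProcessBuilder. The account name and script path below are hypothetical placeholders, and note that runas prompts for the password interactively on the console (it cannot be supplied on the command line), so this only suits attended use:

    import java.io.IOException;

    public class RunScriptAsUser {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Hypothetical account and script path -- replace with real values.
            String user = "MYDOMAIN\\reportuser";
            String command = "perl C:\\scripts\\report.pl";

            // runas launches the command under the given account and prompts
            // for that account's password on the console.
            ProcessBuilder pb = new ProcessBuilder("runas", "/user:" + user, command);
            pb.inheritIO(); // surface the password prompt and the script's output
            Process p = pb.start();
            System.out.println("runas exited with code " + p.waitFor());
        }
    }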

    Read the article

  • Redirect public traffic to a different subfolder, while local traffic remains unchanged

    - by ecnepsnai
    I would like to have local (intranet) HTTP traffic go to the /var/www/html folder while any public traffic goes to the subfolder /var/www/html/public. I've tried this configuration, with some variation, in httpd.conf:

    <VirtualHost PRIVATE-IP>
        DocumentRoot /var/www/html
        ServerName ecn
        ErrorLog /var/www/logs/error/private
        CustomLog /var/www/logs/access/private common
    </VirtualHost>

    <VirtualHost PUBLIC-IP>
        DocumentRoot /var/www/html/public
        ServerName PUBLIC-DOMAIN-NAME
        ErrorLog /var/www/logs/error/public
        CustomLog /var/www/logs/access/public common
    </VirtualHost>

    PUBLIC-IP, PRIVATE-IP, and PUBLIC-DOMAIN-NAME are all replaced with the correct values in the actual file. The problem is that local traffic can browse fine, but remote traffic is directed to the root folder and gets a 403 (because I have that folder blocked off through my .htaccess file). If I append /public to the URL, it works fine.

    Read the article

  • 12.04 server on home network

    - by dustin mantei
    I need advice to help me with a server install; I'm new to Ubuntu and Linux in general. I have 6 systems in my house: 5 are Windows 7, and this laptop I am typing on is Linux Mint 13 Maya. My question (my wife will not transition to anything; she is stuck in Gates-land): can I make a server with Ubuntu 12.04 (I burnt a disc image last night) so that all the systems in my home can access it, and my mother-in-law in another state can also access it with a username? That would be awesome, and it might convince the war dept (my wife) to change all the systems in my house to Ubuntu/Linux. Sorry this is so long-winded, but none of the questions I have seen on this forum answer it completely.

    Read the article

  • Grub errors dual boot Windows 8 / Ubuntu 12.10

    - by luca-mastro
    I have a newly bought ASUS N56V with Windows 8 preinstalled. I needed to install Ubuntu, so I partitioned the disk, and after disabling the Secure Boot option from Windows 8 I successfully installed Ubuntu 12.10 from a live USB. The problem is that if I try to access either Windows 8 (loader) or Windows Recovery System (loader) from the GRUB menu, these two errors show: "can't find command 'drivemap'" and "invalid EFI file path", and it goes back to the GRUB menu. In conclusion, I do not have access to my Windows 8 partition and can only use Ubuntu. How can I solve the problem? I am pretty new to the matter. Thank you!

    Read the article

  • Can Ubuntu be installed in a subdirectory of another Linux variant?

    - by Reid
    I have access to (but not root on) a compute server which is running a Linux distribution that is a few years old. I'd much prefer to use a current Debian-like flavor. Thus, I'm wondering if it is possible to install Ubuntu (or stock Debian) in one of my directories, and use the Ubuntu programs and libraries in preference to what comes with the server. I would need to access arbitrary parts of the server's filesystem, not just the parts under the Ubuntu install. I log in by SSH, so there's no desktop environment needed. But, I would like to be able to use X programs.

    Read the article

  • Building a Repository Pattern against an EF 5 EDMX Model - Part 1

    - by Juan
    I am part of a year-long-plus project that is re-writing an existing application for a client. We have decided to develop the project using Visual Studio 2012 and .NET 4.5. The project will be using a number of technologies and patterns, including Entity Framework 5, WCF services, and WPF for the client UI. This is my attempt at documenting some of the successes and failures that I will be coming across in the development of the application.

    In building the data access layer we have to access a database that has already been designed by a dedicated DBA. The DBA insists on using stored procedures, which has made the use of EF a little more difficult. He will not allow direct table access, but we did manage to get him to allow us to use views. Since EF 5 does not have good support for doing Code First with stored procedures, my option was to create a model (EDMX) against the existing database views. I then had to select each entity and map the Insert/Update/Delete functions to their respective stored procedures.

    The next step, after I had completed mapping the stored procedures to the entities in the EDMX model, was to figure out how to build a generic repository that would work well with Entity Framework 5. After reading the blog posts below, I adopted much of their code, with some changes to allow for the use of Ninject for dependency injection.

    http://www.tcscblog.com/2012/06/22/entity-framework-generic-repository/
    http://www.tugberkugurlu.com/archive/generic-repository-pattern-entity-framework-asp-net-mvc-and-unit-testing-triangle

    IRepository.cs

    public interface IRepository<T> : IDisposable where T : class
    {
        void Add(T entity);
        void Update(T entity, int id);
        T GetById(object key);
        IQueryable<T> Query(Expression<Func<T, bool>> predicate);
        IQueryable<T> GetAll();
        int SaveChanges();
        int SaveChanges(bool validateEntities);
    }

    GenericRepository.cs

    public abstract class GenericRepository<T> : IRepository<T> where T : class
    {
        public abstract void Add(T entity);
        public abstract void Update(T entity, int id);
        public abstract T GetById(object key);
        public abstract IQueryable<T> Query(Expression<Func<T, bool>> predicate);
        public abstract IQueryable<T> GetAll();

        public int SaveChanges()
        {
            return SaveChanges(true);
        }

        public abstract int SaveChanges(bool validateEntities);
        public abstract void Dispose();
    }

    One of the issues I ran into was trying to do an update. I kept receiving errors, so I posted a question on Stack Overflow (http://stackoverflow.com/questions/12585664/an-object-with-the-same-key-already-exists-in-the-objectstatemanager-the-object) and came up with the following hack. If someone has a better way, please let me know.

    DbContextRepository.cs

    public class DbContextRepository<T> : GenericRepository<T> where T : class
    {
        protected DbContext Context;
        protected DbSet<T> DbSet;

        public DbContextRepository(DbContext context)
        {
            if (context == null) throw new ArgumentException("context");
            Context = context;
            DbSet = Context.Set<T>();
        }

        public override void Add(T entity)
        {
            if (entity == null) throw new ArgumentException("Cannot add a null entity.");
            DbSet.Add(entity);
        }

        public override void Update(T entity, int id)
        {
            if (entity == null) throw new ArgumentException("Cannot update a null entity.");
            var entry = Context.Entry(entity);
            if (entry.State == EntityState.Detached)
            {
                var attachedEntity = DbSet.Find(id); // Need to have access to key
                if (attachedEntity != null)
                {
                    var attachedEntry = Context.Entry(attachedEntity);
                    attachedEntry.CurrentValues.SetValues(entity);
                }
                else
                {
                    entry.State = EntityState.Modified; // This should attach entity
                }
            }
        }

        public override T GetById(object key)
        {
            return DbSet.Find(key);
        }

        public override IQueryable<T> Query(Expression<Func<T, bool>> predicate)
        {
            return DbSet.Where(predicate);
        }

        public override IQueryable<T> GetAll()
        {
            return Context.Set<T>();
        }

        public override int SaveChanges(bool validateEntities)
        {
            Context.Configuration.ValidateOnSaveEnabled = validateEntities;
            return Context.SaveChanges();
        }

        #region IDisposable implementation

        public override void Dispose()
        {
            if (Context != null)
            {
                Context.Dispose();
                GC.SuppressFinalize(this);
            }
        }

        #endregion IDisposable implementation
    }

    At this point I am able to start creating the individual repositories that are needed and add a Unit of Work (a short consumption sketch follows below). Stay tuned for the next installment in my path to creating a Repository Pattern against EF5.
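    As a quick illustration of how a repository built this way might be consumed (the Supplier entity and AppDbContext class here are hypothetical placeholders, not part of the original post):

    using (var repository = new DbContextRepository<Supplier>(new AppDbContext()))
    {
        // Query by predicate; the expression is translated to SQL by EF.
        var active = repository.Query(s => s.IsActive).ToList();

        // Update via the hack above; the id argument lets the repository
        // locate the attached instance and overwrite its current values.
        var supplier = repository.GetById(42);
        supplier.Name = "Updated name";
        repository.Update(supplier, 42);
        repository.SaveChanges();
    }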

    Read the article

  • Windows Azure CDN (Content Delivery Network)

    - by kaleidoscope
    Windows Azure CDN caches your Windows Azure blobs at strategically placed locations to provide maximum bandwidth for delivering your content to users. You can enable CDN delivery for any storage account via the Windows Azure Developer Portal. The CDN provides edge delivery only to blobs that are in public blob containers, which are available for anonymous access. Windows Azure CDN has 18 locations globally (United States, Europe, Asia, Australia and South America) and continues to expand. The benefit of using a CDN is better performance and user experience for users who are farther from the source of the content stored in the Windows Azure Blob service. In addition, Windows Azure CDN provides worldwide high-bandwidth access to serve content for popular events.
    [Map: current CDN locations in the US.]
    For more details please refer to the link: http://blogs.msdn.com/windowsazure/archive/2009/11/05/introducing-the-windows-azure-content-delivery-network.aspx
    Sarang

    Read the article

  • Internal HDs that don't contain the OS aren't accessible unless I manually browse them

    - by Hrafn
    So I have 4 internal hard drives, one of which contains the OS (Ubuntu 12.04); all are ext4. After starting the computer, and without having tried to access the drives (file manager, terminal, etc.), it seems the drives haven't been mounted. If I go into the "Disks" utility I see that the disks haven't been mounted. Programs that try to access the drives during startup throw an error: for example, my music player can't find its library, my note-taking software can't find its database, etc. But after opening a drive in a file manager everything works. I've checked SMART on all the disks and everything is OK. Any and all ideas would be appreciated.

    Read the article

  • How to organize my site's file system properly?

    - by Wolfpack'08
    Doing some reading on Stack Overflow, I've found a lot of information suggesting that proper organization of a file system is crucial to a well-written web app. One of the key pieces of evidence is the high frequency of references to "separation of concerns" in questions related to keeping programs organized. Now, I've found some information on organizing file systems (the Filesystem Hierarchy Standard) from 2004. It raises only two concerns: first, the standard is a bit dated, so I believe it may be possible to do better given the changes in technology over the past 8 years; second, and most important, my application is very small compared to an entire Linux distro. I think the file system should be organized very differently because of that. Here's what I'm looking at currently:
    /scripts, /databases
    /www -> /dev, /production -> login, router, admin pages, /sites -> content types, static pages
    /modules, /includes, /css, /media -> module-specific media

    Read the article

  • Best way of accessing data on different pages

    - by Gaz83
    I'm looking for a way to load data into properties/variables and have this information accessible to all the pages of my app. I want the information to be loaded via a background thread to keep the UI thread free. Some of the pages will bind various properties of their controls to these global properties. Here is what I tried:
    - Created a static class. All pages could access the data, but binding didn't work.
    - Changed the static class to a singleton and used DependencyProperty. All pages could access the data and binding worked fine, but I ran into cross-threading issues when accessing it via background threads.
    I have read about this subject in various places but haven't really come up with the best method for my situation.
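    A third pattern sometimes used here, sketched below on the assumption this is WPF (since DependencyProperty is mentioned): a plain singleton that implements INotifyPropertyChanged instead of deriving from DependencyObject. WPF marshals PropertyChanged notifications for scalar properties to the UI thread on its own, so such properties can be set from a background thread; the class and property names are made up for the example.

    using System.ComponentModel;
    using System.Threading.Tasks;

    public sealed class AppData : INotifyPropertyChanged
    {
        private static readonly AppData instance = new AppData();
        public static AppData Instance { get { return instance; } }

        private AppData() { }

        private string status;
        public string Status
        {
            get { return status; }
            set { status = value; OnPropertyChanged("Status"); }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnPropertyChanged(string name)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(name));
        }

        // Kick off the load on a worker thread; bindings to Status pick up
        // the change because WPF dispatches scalar-property notifications
        // back to the UI thread.
        public void BeginLoad()
        {
            Task.Run(() => { Status = LoadSlowly(); });
        }

        private string LoadSlowly()
        {
            System.Threading.Thread.Sleep(1000); // stand-in for real work
            return "Loaded";
        }
    }

    A page can then bind with, for example, {Binding Source={x:Static local:AppData.Instance}, Path=Status}. Collections are the one case that still needs dispatcher marshaling, since WPF does not synchronize CollectionChanged events the same way.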

    Read the article

  • Mounting ddrescue image after recovery (in over my head)

    - by BorgDomination
    I'm having problems mounting the recovery image. I've tried to mount the image multiple ways:

    quark@DS9 ~ $ sudo mount -t ext4 /media/jump1/1recover/sdb1.img /mnt
    mount: wrong fs type, bad option, bad superblock on /dev/loop0,
           missing codepage or helper program, or other error
           In some cases useful info is found in syslog - try
           dmesg | tail or so

    quark@DS9 ~ $ sudo mount -r -o loop /media/jump1/1recover/sdb1.img recover
    mount: you must specify the filesystem type

    quark@DS9 ~ $ sudo mount /media/jump1/1recover/sdb1.img mnt
    mount: you must specify the filesystem type

    It doesn't even give me detailed information on the file I just made; Nautilus says it's 160 GB.

    quark@DS9 ~ $ file /media/jump1/1recover/sdb1.img
    /media/jump1/1recover/sdb1.img: data

    quark@DS9 ~ $ mmls /media/jump1/1recover/sdb1.img
    Cannot determine partition type

    I'm not sure what I'm doing wrong or if I started this process incorrectly from the beginning. I've outlined what I've done so far below. I'm clueless; I'd appreciate it if someone had some input for me.

    What I have done from the beginning:

    My laptop has two hard drives. One has the dual-boot Win7 / Linux Mint system files; the second one contained my /home folder. The laptop was jarred and the /home disk was broken. I tried a live-CD recovery; it failed. It wouldn't even load a live session with the disk installed. So I turned to ddrescue.

    quark@DS9 ~ $ sudo fdisk -l

    Disk /dev/sda: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0009fc18

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048   112642047    56320000    7  HPFS/NTFS/exFAT
    /dev/sda2       138033152   312580095    87273472   83  Linux
    /dev/sda3       112644094   138033151    12694529    5  Extended
    /dev/sda5       112644096   132173823     9764864   83  Linux
    /dev/sda6       132175872   138033151     2928640   82  Linux swap / Solaris

    Partition table entries are not in disk order

    Disk /dev/sdb: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0002a8ea

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *          63   312576704   156288321   83  Linux

    Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xed6d054b

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1              63  1953520064   976760001    7  HPFS/NTFS/exFAT

    sda - 160 GB internal; holds all system files and all computer functions.
    sdb - 160 GB internal; BROKEN; contains about 140 GB of data I'd like to recover.
    sdc - 1 TB external; contains the recovery image; the only place that has space to do all this.

    From this site, https://apps.education.ucsb.edu/wiki/Ddrescue, I used this script to create an image of the broken hard drive, changing the destination to the external USB drive:

    #!/bin/sh
    prt=sdb1
    src=/dev/$prt
    dst=/media/jump1/1recover/$prt.img
    log=$dst.log
    sudo time ddrescue --no-split $src $dst $log
    sudo time ddrescue --direct --max-retries=3 $src $dst $log
    sudo time ddrescue --direct --retrim --max-retries=3 $src $dst $log

    Everything looked like it came off without a hitch:

    quark@DS9 ~ $ sudo bash recover1
    Press Ctrl-C to interrupt
    Initial status (read from logfile)
    rescued: 0 B, errsize: 0 B, errors: 0
    Current status
    rescued: 160039 MB, errsize: 4096 B, current rate: 35588 B/s
    ipos: 3584 B, errors: 1, average rate: 22859 kB/s
    opos: 3584 B, time from last successful read: 0 s
    Finished
    12.78user 1060.42system 1:56:41elapsed 15%CPU (0avgtext+0avgdata 4944maxresident)k
    312580958inputs+0outputs (1major+601minor)pagefaults 0swaps

    Press Ctrl-C to interrupt
    Initial status (read from logfile)
    rescued: 160039 MB, errsize: 4096 B, errors: 1
    Current status
    rescued: 160039 MB, errsize: 1024 B, current rate: 0 B/s
    ipos: 1536 B, errors: 1, average rate: 13 B/s
    opos: 1536 B, time from last successful read: 1.3 m
    Finished
    0.00user 0.00system 3:43.95elapsed 0%CPU (0avgtext+0avgdata 4944maxresident)k
    238inputs+0outputs (3major+374minor)pagefaults 0swaps

    Press Ctrl-C to interrupt
    Initial status (read from logfile)
    rescued: 160039 MB, errsize: 1024 B, errors: 1
    Current status
    rescued: 160039 MB, errsize: 1024 B, current rate: 0 B/s
    ipos: 1536 B, errors: 1, average rate: 0 B/s
    opos: 1536 B, time from last successful read: 3.7 m
    Finished
    0.00user 0.00system 3:43.56elapsed 0%CPU (0avgtext+0avgdata 4944maxresident)k
    8inputs+0outputs (0major+376minor)pagefaults 0swaps

    It looks like, from where I'm standing, it worked perfectly. Here's the log:

    # Rescue Logfile. Created by GNU ddrescue version 1.14
    # Command line: ddrescue --direct --retrim --max-retries=3 /dev/sdb1 /media/jump1/1recover/sdb1.img /media/jump1/1recover/sdb1.img.log
    # current_pos current_status
    0x00000600 +
    # pos size status
    0x00000000 0x00000400 +
    0x00000400 0x00000400 -
    0x00000800 0x254314FC00 +

    I'm not sure how to proceed. Does this mean all of my data is lost? Appreciate ANY input!

    Read the article

  • Passing class names or objects?

    - by nischayn22
    I have a switch statement:

    switch ($id) {
        case 'abc': return 'Animal';
        case 'xyz': return 'Human';
        // many more
    }

    I am returning class names and using them to call some of their static functions with call_user_func(). Instead, I could also create an object of that class, return it, and then call the static function from that object as $object::method($param):

    switch ($id) {
        case 'abc': return new Animal;
        case 'xyz': return new Human;
        // many more
    }

    Which way is more efficient? To make this question broader: I have classes that mostly hold static methods right now; putting the functions into classes is a grouping idea here (for example, the DB table structure of Animal is given by the class Animal, and likewise for the Human class). I need to access many functions from these classes, so the switch needs to give me access to the class.
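    For illustration, a small sketch of the class-name variant in use, assuming the classes expose a static method (the tableName() method here is made up for the example):

    class Animal {
        public static function tableName() { return 'animals'; }
    }
    class Human {
        public static function tableName() { return 'humans'; }
    }

    function classFor($id) {
        switch ($id) {
            case 'abc': return 'Animal';
            case 'xyz': return 'Human';
        }
    }

    // Calling a static method through the returned class name; no object is
    // ever constructed, which avoids needless instantiation for static-only
    // classes.
    $class = classFor('abc');
    echo call_user_func(array($class, 'tableName')); // prints "animals"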

    Read the article

  • Service pack 1 on the way for Windows 7 and Windows Server 2008 R2

    - by John Breakwell
    On the MSMQ front, only two hotfixes are listed:
    2028997 - FIX: Message Queuing may become unresponsive in Windows 7 or in Windows Server 2008 R2
    974813 - FIX: You cannot send or receive messages by using Message Queuing 4.0 or Message Queuing 5.0 after you configure the BindInterfaceIP registry entry.
    out of a total of 625 documented for the service pack. There may, of course, be undocumented changes where an update was not previously released separately and so has no associated KB article published. According to the Core Team, Volume Licensed, MSDN and TechNet subscribers get access February 16th, 2011; all customers get access February 22nd, 2011, through Windows Update and direct download. So get ready to start testing.

    Read the article

  • New RUP Patch for iSupplier Portal, Sourcing and Supplier Lifecycle Management (SLM)

    - by LuciaC
    Just released: the 12.1.3 Rollup (RUP) Patch 17525552:R12.PRC_PF.B for iSupplier Portal, Sourcing and Supplier Lifecycle Management (SLM).
    Who should apply this patch? Anyone on Release 12.1.3 who is using iSupplier Portal, Sourcing or Supplier Lifecycle Management (SLM) functionality.
    The following areas have had major fixes:
    - Prospective Supplier Guided Navigation: Train navigation is introduced for prospective supplier registration, so that prospective suppliers can see all the steps needed to successfully register themselves.
    - Supplier Registration Workflow Enhancement: This release provides Approval Management Engine (AME) action-required notifications for supplier approval, so that all workflow-related features can be enabled. Vacation rules can be set, approvals can be forwarded, and more information can be requested through the notification itself. Additionally, AME parallel approval support for supplier registration approvals has been added.
    - Reinstate Supplier Request: Allows the buyer to reopen/reinstate a rejected supplier. The supplier is able to access their previously rejected registration again, make changes, and resubmit the request.
    - Contact Address Association: The prospective supplier is allowed to associate addresses with contacts (including the primary contact) during the prospective supplier registration process.
    - Primary Contact Enhancement: The prospective supplier can be registered without creating a user account for the primary contact.
    - Mandatory Attributes: On the negotiation requirement creation page, the lookup meaning of 'Internal' has been changed to 'Internal Optional', and a new lookup value with the meaning 'Internal Required' has been added. The values available in the 'Type' dropdown are now Display Only, Internal Optional, Internal Required, Supplier Optional and Supplier Required. During supplier evaluations, an internal user response can now be made mandatory by using the Internal Required type during requirement creation.
    - Notifications to Supplier: When the supplier saves and submits their supplier registration request, a notification with a link to a registration status page is sent for further access. When the buyer approves, rejects or returns the request, the supplier is notified by email with the current status.
    There are also 10 major enhancements included in this RUP. For information about this RUP, including the fixes and enhancements, how to access and apply the patch, performing an impact analysis on your system, and testing recommendations, see Doc ID 1591198.1. Don't delay, apply the patch today!

    Read the article

  • After upgrade from 10.04 to 10.10, no keyboard, cannot login

    - by avar
    Hello, I just did an upgrade from Kubuntu 10.04 to 10.10. After it was all done and I rebooted, when the login box shows up my keyboard and laptop touchpad (mouse) don't work. (If I plug in a USB mouse it works sometimes, but the keyboard never does.) I went to recovery mode, and the boot hangs where it says:
    [ 17.704053] EXT4-fs (sda9): mounted filesystem with ordered data mode
    Begin: Running /scripts/local-bottom ... Done.
    Done.
    Begin: Running /scripts/init-bottom ... Done.
    It is stuck here; nothing works except Ctrl+Alt+Del. I tried booting from the live CD and running update-grub, and also tried booting manually from the GRUB command line; every time it gets stuck at the lines above, so it's not a GRUB problem. How do I solve this? If it is important, I have an ATI Mobility Radeon HD 5470 card.

    Read the article

  • About CDN architecture and routing

    - by Tony Lee
    Our web system uses a third-party CDN service. Assume the user sets their local DNS to Google DNS or OpenDNS to visit our web sites; the CDN service will then select the CDN proxy node closest to that DNS resolver. That is fine in principle, but the user's actual location might be somewhere else entirely, so the CDN service may choose the node furthest away from the user, and static resource access becomes slower. At present, my idea is this: if the user's local DNS is set to Google DNS, we first obtain the actual IP address of the user, traceroute to find the best routing line, set a cookie in the user's browser, and then return a 302 response to jump to the best CDN node. Can a traceroute tool on the user's browser side provide the best routing decision? We find that once a user sets their local DNS to a foreign network segment (for example, 8.8.8.8), the CDN routing chooses a foreign service node.

    Read the article

  • Ubuntu 12.04 PXELINUX does not boot RHEL Kernel and Initrd

    - by utpal
    I have successfully set up a PXE server on Ubuntu 12.04 with DNSMASQ for the DHCP proxy service, TFTPD-HPA for TFTP, NFS-KERNEL-SERVER, APACHE2, and SYSLINUX for the pxelinux.0 bootloader needed for PXE boot, using the following post: http://ubuntuforums.org/showthread.php?t=1606910 I was able to successfully PXE boot a client into a Ubuntu 12.04 live CD. Next, I want to PXE boot a client into a RHEL 6.5 x64 kernel and initrd image. I don't want to install anything; I just want to boot the client so that it mounts the initrd and I get a minimal filesystem on it. How can I do that? Please help!

    Read the article
