Scripting interactive operations can be a challenge, but the expect command is tailor-made for this. Juliet Kemp offers some tips on using this handy scripting tool.
I am having problems with an NTFS disk mounted as fuseblk on my Ubuntu 12.10 system through external USB 3.0.
When I did a 1.1 TB backup with rsync, the speed was 1-2 MB/s (with an ext4 disk the speed was 70 MB/s before and after trying the NTFS disk). Also, after one hour, errors started to appear:
rsync: write failed on "xxx": No such file or directory
recv_files: "yyy" is a directory #but this file is a FILE not a dir ??!!
....
As this is the first time I have mounted an NTFS disk on Linux for heavy usage (the data will be used on Windows afterwards), I would like to know whether this kind of thing is common, or whether something simply became unstable in my system and a restart would probably have solved it.
This leads me to these questions:
Can I trust FUSE to manage NTFS disks?
Or is the problem that the NTFS tools on Linux are not yet totally stable for writing?
Are people still suffering from low performance with FUSE-NTFS vs. ext4 (in the past I have read complaints about this)?
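One thing I'm planning to test, in case it's relevant: remounting the disk explicitly with ntfs-3g and the big_writes option, which I've read can improve write throughput (the device and mount point below are placeholders for my setup):

sudo umount /media/mydisk
sudo mount -t ntfs-3g -o big_writes,noatime /dev/sdb1 /mnt/ntfs   # big_writes reduces per-write FUSE overhead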
Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
Solution Summary
College of American Pathologists Goes Live with Oracle WebCenter - Imaging, AP Invoice Automation, and EBS Managed Attachment with Support for Imaging Content
The College of American Pathologists (CAP) is the leading organization of board-certified pathologists, serving more than 18,000 physician members; 7,000 laboratories are accredited by the CAP, and approximately 22,000 laboratories are enrolled in the College’s proficiency testing programs.
The business objective was to content-enable their Oracle E-Business Suite (EBS) enterprise application by combining the best of Imaging and Managed Attachment functionality, providing a unique opportunity for the business to have unprecedented access to both structured and unstructured content from within their enterprise application.
The solution improves customer service turnaround time, provides better compliance, and improves maintenance and management of the technology infrastructure.
Company Overview
The College of American Pathologists (CAP), celebrating 50 years as the gold standard in laboratory accreditation, is a medical society serving more than 17,000 physician members and the global laboratory community. It is the world’s largest association composed exclusively of board-certified pathologists and is the worldwide leader in laboratory quality assurance. The College advocates accountable, high-quality, and cost-effective patient care. The more than 17,000 pathologist members of the College of American Pathologists represent board-certified pathologists and pathologists in training worldwide. More than 7,000 laboratories are accredited by the CAP, and approximately 23,000 laboratories are enrolled in the College’s proficiency testing programs.
Business Challenges
The CAP business objective was to content-enable their Oracle E-Business Suite (EBS) enterprise application by combining the best of Imaging and Managed Attachment functionality, providing a unique opportunity for the business to have unprecedented access to both structured and unstructured content from within their enterprise application. Their goals were to:
Bring more flexibility to systems and programs in order to adapt quickly
Get a 360 degree view of the customer
Reduce cost of running the business
Solution Deployed
With the help of Oracle Consulting, the customer implemented Oracle WebCenter Content as the centralized E-Business Suite document repository. The solution makes it possible to capture, present, and manage all unstructured content (PDFs, word processing documents, scanned images, etc.) related to Oracle E-Business Suite transactions, exposing the related content through the familiar EBS user interface.
Business Results
The CAP achieved the following benefits from the implemented solution:
Managed Attachment Solution
Align with strategic Oracle Fusion Middleware platform
Integrate with the CAP existing data capture capabilities
Single user interface provided by the Managed Attachment solution for all content
Better compliance and improved collaboration
Account Payables Invoice Processing Imaging Solution
Automated invoice management eliminating dependency on paper materials and improving compliance, collaboration and accuracy
A single repository to house and secure scanned invoices and all supplemental documents
Greater management visibility of invoice entry process
Additional Information
CAP OpenWorld Presentation
Oracle WebCenter Content
Oracle WebCenter Capture
Oracle WebCenter Imaging
Oracle Consulting
I have just installed Ubuntu on a machine that previously had XP installed on it. The machine has 2 HDD (hard disk drives). I opted to install Ubuntu completely over XP.
I am new to Linux, and I am still learning how to navigate the file structure. However, AFAICT, there is only one drive. I want to be able to store programs etc. on the first drive, and store data (program output etc.) on the second drive.
It appears Ubuntu is not aware that I have 2 drives (on XP, these were drives C and D).
How can I mount the second drive? Ideally, I want this to happen automatically, so that the drive is available to me whenever I log in, without manual intervention on my part.
In XP, I could refer to files on a specific drive by prefixing the path with the drive letter (e.g. c:\foobar.cpp and d:\foobar.dat). I suspect the notation on Ubuntu is different. How may I specify files on different drives?
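From my reading so far, I suspect the answer involves something like the following, though I haven't tried it yet and the device name is a guess on my part:

sudo fdisk -l                # list all disks; the second drive will likely show up as /dev/sdb
sudo mkdir /data             # create a mount point for it
sudo mount /dev/sdb1 /data   # mount its first partition

and, for mounting automatically at boot, a line in /etc/fstab such as:

/dev/sdb1  /data  ext4  defaults  0  2   # use ntfs instead of ext4 if the drive still has its Windows filesystem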
Last but not least (a bit unrelated to the previous questions), and this relates to directory structure again: I am a developer (C++ for desktops and PHP for websites), and I want to install the following apps/libraries:
i). Apache 2.2
ii). PHP 5.2.11
iii). MySQL (5.1)
iv). SVN
v). Netbeans
vi). C++ development tools (gcc, gdb, emacs etc)
vii). QT toolkit
viii). Some miscellaneous scientific software (e.g. www.r-project.org, www.gnu.org/software/octave/)
I would be grateful if someone could recommend a directory layout for these applications. Regarding development, I would also be grateful if someone could point out where to store my project and source files, i.e.:
(i) *.cpp, *.hpp, *.mak files for cpp projects
(ii) individual websites
On my XP machine the layout for C++ dev was like this:
c:\dev\devtools (common libs and headers etc)
c:\dev\workarea (root folder for projects)
c:\dev\workarea\c++ (c++ projects)
c:\dev\workarea\websites (web projects)
I would like to create a similar folder structure on the Linux machine, but it's not clear whether to place these folders under /, /usr, /home, or somewhere else. There seems to be a baffling number of choices, so I want to get it "right" the first time, i.e. have a directory structure that most developers use, so that it is easier when communicating with other Ubuntu/Linux developers.
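To be concrete, what I had in mind was something along these lines under my home directory (just my guess at a sensible translation of the XP layout):

mkdir -p ~/dev/devtools            # common libs and headers etc.
mkdir -p ~/dev/workarea/cpp        # C++ projects
mkdir -p ~/dev/workarea/websites   # web projects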
Lately, we've become aware of a TCP connection issue that is mostly limited to Mac and Linux users who browse our websites.
From the user perspective, it presents itself as a really long connection time to our websites (11 seconds).
We've managed to track down the technical signature of this problem, but can't figure out why it is happening or how to fix it.
Basically, what is happening is that the client's machine is sending the SYN packet to establish the TCP connection and the web server receives it, but does not respond with the SYN/ACK packet. After the client has sent many SYN packets, the server finally responds with a SYN/ACK packet and everything is fine for the remainder of the connection.
And, of course, the kicker: it is intermittent and does not happen all the time (though it does happen between 10-30% of the time).
We are using Fedora 12 Linux as the OS and Nginx as the web server.
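In case it helps, this is roughly how we have been capturing the handshake and checking the usual listen-queue suspects (commands reconstructed from memory, so treat them as a sketch):

tcpdump -i eth0 -w syn.pcap 'tcp port 80 and tcp[tcpflags] & (tcp-syn|tcp-ack) != 0'   # capture the handshake packets
netstat -s | grep -i listen                                                            # look for listen-queue overflows / dropped SYNs
sysctl net.ipv4.tcp_max_syn_backlog net.core.somaxconn net.ipv4.tcp_syncookies         # kernel limits affecting the SYN backlog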
"I tried to make this easy to read through"
I am using Ubuntu 12.04 LTS for Magento and ran these commands on the system:
sudo apt-get install apache2
sudo apt-get install php5 libapache2-mod-php5
sudo apt-get install php5-mysql
sudo apt-get install php5-curl php5-mcrypt php5-gd php5-common
sudo apt-get install php5-gd
I used Windows Server 2008 R2 (August 2012) for the MySQL server.
For reference, I used http://www.windowsazure.com/en-us/manage/windows/common-tasks/install-mysql/
When the server was set up, I added an empty disk to it.
Then, I added endpoint 3306.
Next, I accessed the server remotely.
After that, I formatted the empty disk, which was mounted as F:.
Next, I downloaded MySQL from http://*.mysql.com, version Windows (x86, 64-bit), MSI Installer 5.5.28.
In the installation process, I used these settings:
Typical Setup - Clicked Next, install, next
Chose Detailed Configuration - Clicked next
Chose Dedicated MySQL Server Machine - Clicked Next
Chose Transactional Database Only - Clicked Next
Chose the "F:" Drive - Clicked Next
Chose Online Transactional Processing (OLTP) - Clicked Next
For Networking Options, I checkmarked 'Enable TCP/IP Networking', 'Add firewall exception for this port', and 'Enable Strict Mode' - Clicked Next
Chose Standard Character Set - Clicked Next
For Windows Options, I checkmarked 'Install as Windows Service', 'Launch the MySQL Server automatically', and 'Include Bin Directory in Windows PATH' - Clicked Next
For Security Options, I checkmarked 'Modify Security Settings' and set root password - Clicked Next
Finally clicked Execute and Finish
These are the firewall settings that I set:
I clicked inbound rules
Properties
Scope
Allow IP Address and used the internal Address for Magento Server
Clicked Apply and exited
Next, I opened up MySQL 5.x Command Line Client
Entered Root Password
Then I entered these commands:
mysql> create database magento;
mysql> create user magentouser identified by 'password';
mysql> grant select, insert, create, alter, update, delete, lock tables on magento.* to magentouser;
mysql> exit
Finally, I opened up the Magento Downloader
Magento validation approved everything:
PHP version is right. Your version is 5.3.10-1ubuntu3.4.
PHP Extension curl is loaded
PHP Extension dom is loaded
PHP Extension gd is loaded
PHP Extension hash is loaded
PHP Extension iconv is loaded
PHP Extension mcrypt is loaded
PHP Extension pcre is loaded
PHP Extension pdo is loaded
PHP Extension pdo_mysql is loaded
PHP Extension simplexml is loaded
These are all installed on the Magento server.
The database server only has MySQL 5.5 Server installed on it.
For the database connection, I used:
Host - Internal IP address
User Name - The User I created when setting up database
Password - The Password I created when setting up database
For the password, I did some research and found out that Magento only accepts alphanumeric characters, so I went and set it up again and used only alphanumeric characters for the user password.
Now, I am still getting 'Access denied' for the database connection.
Also, I have tried to set up MySQL on an independent Linux server but kept getting errors; even when I found a solution, it wouldn't work, so I decided to try Windows.
These are the questions I have been asking and researching to debug this issue:
Is it because I am using Linux for Magento and Windows for the database? I have had no luck in finding a reason why this wouldn't work.
There must be something I am missing.
I also researched the differences between Linux SQL databases and Windows SQL databases, but have not come to a conclusion about whether installing MySQL on Windows would make a difference in syntax and coding.
I have spent a lot of time looking into this and need some help with direction on how to complete my project. Any type of help would be appreciated.
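In the meantime, these are the checks I am planning to run next, in case anyone can confirm they make sense (the host and password values are placeholders):

# from the Magento (Ubuntu) server: can I reach MySQL over the network at all?
mysql -h <windows-server-ip> -u magentouser -p magento
# on the MySQL server: make sure the user is allowed to connect from a remote host
mysql> CREATE USER 'magentouser'@'%' IDENTIFIED BY 'password';
mysql> GRANT SELECT, INSERT, CREATE, ALTER, UPDATE, DELETE, LOCK TABLES ON magento.* TO 'magentouser'@'%';
mysql> FLUSH PRIVILEGES;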
Hi,
I'm working for an important company which has some severe network policies. I'd like to connect from my work to my home Linux server (mainly because it allows me to monitor my home-automation installation, but that's off-topic), but of course, any ssh connection (TCP port 22) to an external site is blocked. While I understand why this is done (to avoid ssh tunnels, I guess), I really need to have some access to my box.
(Well, "need" might be exaggerated, but it would be nice.)
Do you know of any web-based solution that I could install on my home Linux server that would give me a pseudo-terminal (served over HTTPS) embedded in a web page? I'm not necessarily looking for something graphical: a simple web-embedded ssh console would do the trick.
Or do you guys see any other solution that wouldn't compromise network security ?
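For what it's worth, one tool I came across while searching is shellinabox, which (if I understood the docs correctly) serves a terminal over HTTPS; on Debian/Ubuntu the setup seems to be roughly:

sudo apt-get install shellinabox
# then browse to https://yourserver:4200 (4200 is the default port, as far as I can tell)

But I'd be happy to hear about alternatives or caveats.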
Thank you very much for your solutions/advice.
Getting started:
To get started, log on to Windows XP and click Start –> then right-click ‘My Computer’ and select ‘Properties’.
Then select the ‘Computer Name’ tab and click ‘Change’.
Enter the Computer and Workgroup name and click OK. Make sure all systems use the same Workgroup name. You will have to restart your computer for the change to take effect.
After restarting, click Start –> Control Panel.
Select Security Center –> Windows Firewall.
When Windows Firewall opens, select ‘Exceptions’ tab and check the box to enable File and Printer Sharing. Close out when done.
Next, log on to Fedora and go to System –> Administration –> Add/Remove Software.
Then search for and install system-config-samba. Install all additional packages when prompted.
Ensure that the network settings, including the correct gateway, are configured so that your system can access the Internet.
After installing, go to System –> Administration –> Samba.
Then select Preferences –> Server Settings.
Enter the Workgroup name here and click OK.
Select Preferences –> Samba Users.
Edit or Add User to samba database and click OK.
To create shares, click File –> Add Share, then select the folder you wish to share and check:
Writable
Visible
Then select ‘Access’ tab and give users access to the shares, then click OK to save.
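For reference, these GUI steps end up writing a share definition to /etc/samba/smb.conf that looks roughly like this (the share name, path, and user below are examples):

[shared]
    path = /home/user/shared
    writable = yes
    browseable = yes
    valid users = user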
Next, go to System –> Administration –> Firewall.
Select ‘Samba’ under ‘Trusted Services’ and enable Samba.
Next, select ‘ICMP’ and enable ‘Echo Reply (pong)’ and ‘Echo Request (ping)’.
Also add the eth0 interface to the trusted interfaces.
After that go to Applications –> System Tools –> Terminal and run the command below:
su -c 'chkconfig smb on'
Restart your computer and if everything is setup correctly, you should be able to view shares from either system.
At the terminal:
su
setenforce 0
service smb restart
service nmb restart
exit
Enjoy!
I am running Alfresco Community Edition 3.4c on Debian Linux. I have problems getting the Kerberos authentication in order. The biggest problem is that I do not seem to have any sort of user logs.
What I am using already:
log4j.logger.org.alfresco.web.app.servlet.KerberosAuthenticationFilter=debug
log4j.logger.org.alfresco.repo.webdav.auth.KerberosAuthenticationFilter=debug
log4j.logger.org.alfresco.smb.protocol=debug
log4j.logger.org.alfresco.fileserver=debug
I've also checked whether the users actually reach the server, and they do (also, with Firefox on a Linux machine outside of the domain, I seem to be able to log in).
Can anyone help me get more user logging?
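I am also wondering whether turning up logging on the authentication packages more broadly would help, e.g. with the line below (I am guessing at the package name, so corrections are welcome):

log4j.logger.org.alfresco.repo.security.authentication=debug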
I am trying to build Linux From Scratch, and now I am at chapter 5.4, which tells me how to build Binutils. I have the binutils 2.20 source code, but when I try to build it:
time { ./binutils-2.20/configure --target=$LFS_TGT --prefix=/tools --disable-nls --disable-werror ; }
it gives me an error:
checking build system type... i686-pc-linux-gnu
checking host system type... i686-pc-linux-gnu
checking target system type... i686-lfs-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether ln works... yes
checking whether ln -s works... yes
checking for a sed that does not truncate output... /bin/sed
checking for gawk... gawk
checking for gcc... gcc
checking for C compiler default output file name...
configure: error: in `/media/LFS':
configure: error: C compiler cannot create executables
See `config.log' for more details.
You can see my config.log at pastebin.com: http://pastebin.com/hX7v5KLn
I have just installed Ubuntu 10.04, reinstalled GCC, and installed G++. Also, the build is done by a non-root, non-admin user called 'lfs' (as described in Linux From Scratch), on a different partition than the one where the system is installed.
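One thing I plan to check, based on hints I've seen elsewhere, is whether the /media/LFS partition is mounted noexec, which (as far as I understand) would prevent configure's test executables from running:

mount | grep /media/LFS                  # look for 'noexec' among the mount options
sudo mount -o remount,exec /media/LFS    # remount with exec if it is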
Can anyone help me? Thanks
The Exadata & Manageability Partner Communities will be celebrating a Community Forum in San Francisco during Oracle OpenWorld. The session will take place on Monday, October 1st, from 4:00 - 6:00 pm local time. If you would like to present an experience around a customer project or sales best practice in the Manageability or Quality & Testing areas, please contact [email protected] with a short description of your proposal.
In my previous blog I described the steps to get OpenStack on Solaris up and running. Now we'll explore how WebLogic and OpenStack can work together to deliver a truly elastic middleware Platform as a Service.
Middleware / Platform as a Service goals
First, let's define what PaaS should be: PaaS offerings facilitate the deployment of applications without the complexity of managing the underlying hardware and software and of provisioning hosting capabilities.
To break it down:
- PaaS provides a complete platform for hosting solutions (Java EE, SOA, BPM, ...)
- Infrastructure provisioning (virtual machine, OS, platform) and management are hidden from the PaaS user [administrator or developer]
- Additionally, PaaS could/should define target SLAs, and the platform should ensure the SLAs are met automatically.
PaaS use case
To make it more tangible, consider an IT administrator who needs to deploy a Java EE enterprise application. The application is used by external users who need to submit reports by the end of each month. As a result, the number of concurrent users will fluctuate, with huge spikes expected around the end of each month.
The SLA agreed with management is that no more than 100 requests should be waiting to be processed at any given time. In addition, the IT admin has no more than 3 days to get the platform and the application operational.
The Challenges
Some of the challenges the IT Administrator is facing are:
- how are we going to ensure the processing power?
- how are we going to provision the (virtual) machines, Java EE platform and deploy the application?
- how are we going to monitor the SLA?
- how are we going to react to SLA violations and increase capacity?
The Ideal Solution
Ideally, the whole process should be automated, "set it and forget it", and require no human interaction:
- The vendor packages the solution as deployable image(s)
- The images are deployed to the IaaS
- From there, automated processes take care of SLA
Solution Architecture with WebLogic 12c, Dynamic Clusters, OpenStack & Solaris
Oracle Solaris provides the OS and virtualisation through Solaris Zones
OpenStack is part of Solaris 11.2 and provides cloud management (console and API)
WebLogic 12c with Dynamic Clusters provides the platform
Traffic Manager provides load balancing
On top of that, we are going to implement a small control script - Cloud Manager - which is going to monitor the SLA through the WebLogic Diagnostic Framework (WLDF). In case there are more than 100 pending requests, the script will (see the sketch after this list):
- provision a new virtual machine based on an image that is configured for the WebLogic domain
- add the machine to the WebLogic domain
- increase the number of servers in the dynamic cluster
- start the newly provisioned server
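To make the idea concrete, a simplified sketch of that control loop might look like the following; the helper scripts, image name, and flavor are illustrative placeholders, not a finished implementation:

#!/bin/sh
# Cloud Manager sketch: scale out when the SLA threshold is exceeded
PENDING=$(./query_wldf_pending_requests.sh)   # hypothetical helper that reads pending requests from WLDF
if [ "$PENDING" -gt 100 ]; then
  # provision a new VM from an image pre-configured for the WebLogic domain
  nova boot --image weblogic-node-image --flavor m1.medium "wls-node-$(date +%s)"
  # hypothetical WLST wrapper: registers the machine, grows the dynamic cluster, starts the server
  ./scale_dynamic_cluster.sh
fi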
Stay tuned for part II.
The whole solution, with a working demo, will be presented in one of our Partner WebCasts in June; the exact date is TBA.
Jernej Kaše is a Fusion Middleware Specialist working closely with Oracle Partners in the ECEMEA region to grow their business by leveraging Oracle technology.
I'm having some issues with networking on a new Linux server I'm building. The OS is SLES 11. When booting into runlevel 1, I see that eth0 is showing an IP. Physically, there is a network cable plugged into the card associated with eth1, and there is a network cable plugged into a QLogic iSCSI card (eth4, not shown). I've been troubleshooting this for a while, and it seems like eth0 is somehow getting assigned an IP, even though it isn't configured in Linux, or even plugged into the network for that matter. Thoughts?
Here is the output of ifconfig -a:
(Sorry, I need more rep before I can post images on SF...)
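For reference, here is what I have been checking on the SLES side (eth0's interface configuration, in case something there sets BOOTPROTO to dhcp):

cat /etc/sysconfig/network/ifcfg-eth0    # SLES keeps per-interface config here
grep BOOTPROTO /etc/sysconfig/network/ifcfg-eth*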
The new survey from the Independent Oracle Users Group (IOUG) titled "Closing the Security Gap: 2012 IOUG Enterprise Data Security Survey," uncovers some interesting trends in IT security among IOUG members and offers recommendations for securing data stored in enterprise databases. "Despite growing threats and enterprise data security risks, organizations that implement appropriate detective, preventive, and administrative safeguards are seeing significant results," finds the report's author, Joseph McKendrick, analyst, Unisphere Research. Produced by Unisphere Research and underwritten by Oracle, the report is based on responses from 350 IOUG members representing a variety of job roles, organization sizes, and industry verticals. Key findings include:
Corporate budgets increase, but still trail. Though corporate data security budgets are increasing this year, they still have room to grow to reach the previous year’s spending. Additionally, more than half of respondents say their organizations still do not have, or are unaware of, data security plans to help address contingencies as they arise.
Danger of unauthorized access. Less than a third of respondents encrypt data that is either stored or in motion, and at the same time, more than three-fifths say they send actual copies of enterprise production data to other sites inside and outside the enterprise.
Privileged user misuse. Only about a third of respondents say they are able to prevent privileged users from abusing data, and most do not have, or are not aware of, ways to prevent access to sensitive data using spreadsheets or other ad hoc tools.
Lack of consistent auditing. A majority of respondents actively collect native database audits, but there has not been an appreciable increase in the implementation of automated tools for comprehensive auditing and reporting across databases in the enterprise.
IOUG Recommendations
The report's author finds that securing data requires not just the ability to monitor and detect suspicious activity, but also to prevent the activity in the first place. To achieve this comprehensive approach, the report recommends the following.
Apply an enterprise-wide security strategy. Database security requires multiple layers of defense that include a combination of preventive, detective, and administrative data security controls.
Get business buy-in and support. Data security only works if it is backed through executive support. The business needs to help determine what protection levels should be attached to data stored in enterprise databases.
Provide training and education. Often, business users are not familiar with the risks associated with data security. Beyond IT solutions, what is needed is a well-engaged and knowledgeable organization to help make security a reality.
Read the IOUG Data Security Survey Now.
I have to buy backup software to back up a VMware environment with the following servers/applications:
mixed microsoft windows 2003/2008/2012 server standard environment
sql server 2005/2008
mixed linux centos/ubuntu servers
postgresql
sap environment
exchange 2007
linux fileservers
windows fileservers
active directory
random applications/sqlserver/fileserver on workstations xp/7/8
My hardware is: 5 blades in an IBM BladeCenter, various SANs, and an LTO4 drive on 4 Gbit Fibre Channel connected to a Windows 2003 blade, where I will install the backup software (Backup Exec or ARCserve).
What are your advice and comments on the choice between Backup Exec V-Ray and ARCserve?
I know that ARCserve has a lower price.
I used Backup Exec for some years, but I found it pretty complicated.
Thank you.
I am trying to sync two directories using rsync.
The source is on Linux, and the other is on Windows.
So, I mounted the directory on Windows using the command mount -t cifs ..... on the Linux system.
Then I executed rsync ....
Everything is OK, but rsync prints out
rsync: chown "/mnt/windows/A/." failed: Permission denied (13)
rsync: chown "/mnt/windows/A/readme.txt" failed: Permission denied (13)
I want to sync the directories without changing ownership.
How can I do this? Please let me know.
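From the man page, it looks like I can tell rsync to skip ownership changes with the --no-owner/--no-group overrides, something like this (untested on my setup):

rsync -av --no-owner --no-group /source/dir/ /mnt/windows/A/   # -a implies -o/-g; these flags turn just those two off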
Thanks in advance.
A Day in the Life of an OpenWorld Attendee Part I
Lots of people are blogging insightfully about OpenWorld, so I thought I would provide some non-insightful remarks to buck the trend!
With 50,000 attendees, I didn’t expect to bump into too many people I knew; boy, was I wrong! I walked into the registration area and was immediately hailed by a couple of customers I had worked with a few months ago. Moving to the employee registration area in a different hall, I bumped into a colleague from the UK who was also registering. As soon as I got my badge, I bumped into a friend from Ireland! So maybe OpenWorld isn’t so big after all!
First port of call was Larry’s keynote. As always, Larry was provocative and thought-provoking. His key points were announcing the Oracle cloud offering in IaaS, PaaS, and SaaS; pointing out that Fusion Apps are cloud-enabled; and finally announcing the 12c Database, making a big play of its new multi-tenancy features. His contention was that multi-tenancy will simplify cloud development and provide better security by providing DB-level isolation for applications and customers.
The next day, Monday, was my first full day at OpenWorld. The first session I attended was on monitoring of OSB: a very interesting presentation on the benefits achieved by an Illinois-area telco, US Cellular. It was a great discussion of why they bought the SOA Management Packs and the benefits they are already seeing from their investment in terms of improved provisioning and time to market, as well as better performance insight and assistance with capacity planning.
Craig Blitz provided a nice walkthrough of where Coherence has been and where it is going.
Last night I attended the BOF on Managed File Transfer, where Dave Berry relayed Oracle’s thoughts on providing dedicated Managed File Transfer as part of the 12c SOA release. Dave laid out the perceived requirements and solicited feedback from the audience on what, if anything, was missing. He also demoed an early version of the functionality, which would simplify setting up MFT in SOA Suite and make tracking activity much easier.
So much for Day 1. I also ran into scores of old friends and colleagues and had a pleasant dinner with my friend from Ireland where I caught up on the latest news from Oracle UK. Not bad for Day 1!
I am trying to install nginx on my server; however, it keeps returning "./configure: error: perl 5.6.1 or higher is required" even though I have perl v5.8.8!
I have already downloaded perl, and I am trying to configure nginx using the following command:
./configure --with-http_stub_status_module --with-http_perl_module --with-http_flv_module --add-module=nginx_mod_h264_streaming
Here is the output:
[root@fst nginx-0.8.55]# ./configure --with-http_stub_status_module --with-http_perl_module --with-http_flv_module --add-module=nginx_mod_h264_streaming
checking for OS
+ Linux 2.6.18-308.el5 x86_64
checking for C compiler ... found
+ using GNU C compiler
+ gcc version: 4.1.2 20080704 (Red Hat 4.1.2-52)
checking for gcc -pipe switch ... found
checking for gcc builtin atomic operations ... found
checking for C99 variadic macros ... found
checking for gcc variadic macros ... found
checking for unistd.h ... found
checking for inttypes.h ... found
checking for limits.h ... found
checking for sys/filio.h ... not found
checking for sys/param.h ... found
checking for sys/mount.h ... found
checking for sys/statvfs.h ... found
checking for crypt.h ... found
checking for Linux specific features
checking for epoll ... found
checking for sendfile() ... found
checking for sendfile64() ... found
checking for sys/prctl.h ... found
checking for prctl(PR_SET_DUMPABLE) ... found
checking for sched_setaffinity() ... found
checking for crypt_r() ... found
checking for sys/vfs.h ... found
checking for nobody group ... found
checking for poll() ... found
checking for /dev/poll ... not found
checking for kqueue ... not found
checking for crypt() ... not found
checking for crypt() in libcrypt ... found
checking for F_READAHEAD ... not found
checking for posix_fadvise() ... found
checking for O_DIRECT ... found
checking for F_NOCACHE ... not found
checking for directio() ... not found
checking for statfs() ... found
checking for statvfs() ... found
checking for dlopen() ... not found
checking for dlopen() in libdl ... found
checking for sched_yield() ... found
checking for SO_SETFIB ... not found
configuring additional modules
adding module in nginx_mod_h264_streaming
+ ngx_http_h264_streaming_module was configured
checking for PCRE library ... found
checking for system md library ... not found
checking for system md5 library ... not found
checking for OpenSSL md5 crypto library ... found
checking for zlib library ... found
checking for perl
+ perl version: v5.8.8 built for x86_64-linux-thread-multi
./configure: error: perl 5.6.1 or higher is required
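While researching, I found suggestions that this message can appear when the ExtUtils::Embed perl module is missing, since nginx's configure apparently uses it to probe perl. So my next step is something like this (the package name may vary by distro release):

yum install perl-ExtUtils-Embed            # RHEL/CentOS package providing ExtUtils::Embed
perl -MExtUtils::Embed -e 'print "ok\n"'   # quick check that the module loads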
Oracle Virtual Desktop Infrastructure 3.4.1 has been released to complete the storage matrix below.
Storage Type               | VirtualBox on Solaris | VirtualBox on Enterprise Linux
Sun ZFS                    | yes                   | yes
Sun ZFS (pool on Solaris)  | yes                   | yes
iSCSI                      | -                     | new in VDI 3.4
Network File System        | new in VDI 3.4.1      | new in VDI 3.4
Local Storage              | new in VDI 3.4.1      | new in VDI 3.4