Search Results

Search found 30894 results on 1236 pages for 'best practice'.


  • Option "do not show schema names" in SQL Server Management Studio

    - by Jörgen Sigvardsson
    I love the fact that tree and list controls in Windows allow incremental searches: just select a starting point and type, and the control will select the best matching node for you. This works in SSMS, but there's an annoying problem, especially in the table node: SSMS prefixes all table names with the schema name and a dot. To make an incremental search here, I have to type 'dbo.' followed by whatever I'm searching for. Is there an option to turn off this table name representation in SSMS? I hope I'm asking this on the right Stack Exchange site. If you feel it's off base, let me know!

    Read the article

  • How to represent an agile project to people focused on waterfall [closed]

    - by ahsteele
    Our team has been asked to represent our development efforts in a project plan. No one is unhappy with our work or questioning our ability to deliver; we are just participating in an IT cattle call for project plans. The trouble is that we are an agile team and haven't thought about our work in terms of a formal project plan. While we have a general idea of what we are working on next, we aren't 100% sure until we plan an iteration. Until now our team has largely operated in a vacuum and has not been required to present our methodology or metrics to outside parties. We follow most of the practices espoused in Extreme Programming. We hold quarterly planning meetings to get a general idea of the stories we are going to work on for a quarter. That said, our stories are documented on 3x5 cards and are only estimated at the beginning of the iteration in which they are going to be worked. After estimation we document the story in Team Foundation Server. During an iteration, we attach code to stories and mark stories as completed once finished. From this data we are able to generate burn-down and velocity charts. Most importantly, we know our average velocity for an iteration, which keeps us from biting off more than we can chew. I am not looking to modify the way we do development, but I do want to present our development activities in a report that someone only familiar with waterfall will understand. In What Does an Agile Project Plan Look Like, Kent McDonald does a good job laying out the differences between agile and waterfall project plans. He specifies the differences in consumable bullets:

      - An agile project plan is feature based
      - An agile project plan is organized into iterations
      - An agile project plan has different levels of detail depending on the time frame
      - An agile project plan is owned by the team

    Being able to explain the differences is great, but how best to present the data?

    Read the article

  • Configure postfix to filter email into hold queue

    - by Ian
    Hey, I would like Postfix to hand all email received via SMTP to an external process, which will decide whether to let each message through as normal or put it into the hold queue (or another quarantine area), where it has to wait for admin approval. I was thinking of doing this with an after-queue content filter, which uses pipe(8) to run a script on each message; the script itself would run "postsuper -h" on the message's queue ID if it decides to put the message on hold. The admin can then run postsuper -d or -r to delete the message or release it, as appropriate. So, my questions are: a) will this work, and b) is this the best way to do it? Would a milter or another type of content filter be a better approach?
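
    A minimal sketch of the pipe(8) filter idea, in Python, modeled on the pattern in Postfix's FILTER_README (the quarantine directory and the decision rule are assumptions; rather than calling postsuper on mail it has already dequeued, this sketch parks held messages in its own quarantine directory for the admin to release):

      #!/usr/bin/env python3
      # Sketch of an after-queue pipe(8) filter. master.cf would invoke it as:
      #   argv=/usr/local/bin/hold-filter -f ${sender} -- ${recipient}
      # (paths, the header check, and the quarantine location are all assumptions)
      import os
      import subprocess
      import sys
      import time

      SENDMAIL = "/usr/sbin/sendmail"        # standard Postfix re-injection path
      QUARANTINE = "/var/spool/quarantine"   # hypothetical admin-review area

      def should_hold(message: bytes) -> bool:
          # Placeholder decision logic; swap in the real policy check.
          return b"X-Suspicious: yes" in message

      def main() -> int:
          message = sys.stdin.buffer.read()
          if should_hold(message):
              name = "msg-%d-%d" % (int(time.time()), os.getpid())
              with open(os.path.join(QUARANTINE, name), "wb") as f:
                  f.write(message)   # admin re-injects later with sendmail(1)
              return 0
          # Hand clean mail back to Postfix via sendmail(1), as FILTER_README does;
          # sys.argv[1:] carries the -f ${sender} -- ${recipient} arguments through.
          return subprocess.run([SENDMAIL, "-G", "-i"] + sys.argv[1:],
                                input=message).returncode

      if __name__ == "__main__":
          sys.exit(main())

    A milter, by contrast, can return a quarantine action before the message is queued, which is closer in spirit to postsuper -h, so it may be worth considering for exactly this use case.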

    Read the article

  • Priority Manager – Part 1 – Laying out the plan

    - by Patrick Liekhus
    Now that we have shown the EDMX with XPO/XAF and how to use SpecFlow and BDD to run EasyTest scripts, let’s put it all together and show the evolution of a project using all the tools combined. I have a simple project that I use to track my priorities throughout the day.  It uses some of Stephen Covey’s principles from The 7 Habits of Highly Effective People.  The idea is to write down all your priorities the night before and rank them, so that when you get started tomorrow you will have your list of priorities.  It’s not that new things won’t appear tomorrow and force you to reprioritize your list, but at least now you can track them.  My idea is to create a project that will let you manage your list from your desktop, a web browser or your mobile device, so that your list is never too far away.  I will lay out the data model and the additional concepts as time progresses. My goal is to show the power of all of these tools combined, and I thought the best way would be to build a project in sequence.  I have had this idea for quite some time, so let’s get it completed with the outline below. Here is the outline of the series of posts in the near future:

      - Part 2 – Modeling the Business Objects
      - Part 3 – Changing XAF Default Properties
      - Part 4 – Advanced Settings within Liekhus EDMX/XAF Tool
      - Part 5 – Custom Business Rules
      - Part 6 – Unit Testing Our Implementation
      - Part 7 – Behavior Driven Development (BDD) and SpecFlow Tests
      - Part 8 – Using the Windows Application
      - Part 9 – Using the Web Application
      - Part 10 – Exposing OData from our Project
      - Part 11 – Consuming OData with Excel PowerPivot
      - Part 12 – Consuming OData with iOS
      - Part 13 – Consuming OData with Android
      - Part 14 – What’s Next

    I hope this helps outline what to expect.  I anticipate that I will have additional topics mixed in there, but I plan on getting this outline completed within the next several weeks.  Thanks

    Read the article

  • Today @ OOW: Identity Management for the SoMoClo world

    - by B Shashikumar
    Today at OpenWorld, we have a very interesting lineup of identity management sessions that discuss how to extend identity management securely to cloud, mobile and social ecosystems. Here are three of today's can't-miss identity management sessions:

      - Identity Management and the Cloud: Security is regularly identified as the #1 barrier to cloud service adoption. Oracle Identity Management is designed to help customers extend and connect core identity services to SaaS applications and systems. This session explores how organizations are using Oracle Identity Management with cloud services and how some customers are offering identity management as a cloud service.
      - Real-time External Authorization for Applications, Middleware and Databases: Externalization of authorization is key to manageability and audit. This session covers enterprise-wide authorization solution deployment best practices and real-world examples of using Oracle Entitlements Server—the one-stop standards-compliant authorization solution—for middleware, applications, and data.
      - Delivering Secure WiFi on the Tube as an Olympics Legacy from London 2012: In this session, Virgin Media, the U.K.’s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand.

    Here is the complete lineup of identity management sessions today at OOW.

    Read the article

  • organizing images by resolution with batch files

    - by Anthony
    Doing some digging, I'm trying to figure out a command line solution for organizing very large archives of images into folders based on their resolution: 1920x1080, 1600x1200, 1600x900, etc. I've come across a few posts on Super User mentioning something called ImageMagick; is that the best method for the madness I'm trying to accomplish? I've never used any command line functions/applets/tools other than those that come from Microsoft. I'm rather new to command line usage, but I've been enjoying the hell out of it using PowerShell, xcopy and robocopy. I am slowly trying to push myself further into the Linux world with Ubuntu running on one of my physical machines as well as a virtual machine, so that's an option as well.
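
    ImageMagick fits this well. As a sketch (assuming its identify tool is on the PATH; the source folder name and extension filter are placeholders), a short Python script can sort images into per-resolution folders:

      #!/usr/bin/env python3
      # Sort images into subfolders named after their resolution (e.g. 1920x1080),
      # using ImageMagick's identify to read the dimensions.
      import pathlib
      import shutil
      import subprocess

      SRC = pathlib.Path("images")   # hypothetical source folder

      for img in SRC.glob("*.jpg"):  # widen the pattern for other formats
          # "%wx%h" prints width and height, e.g. "1920x1080".
          res = subprocess.run(
              ["identify", "-format", "%wx%h", str(img)],
              capture_output=True, text=True, check=True,
          ).stdout.strip()
          dest = SRC / res
          dest.mkdir(exist_ok=True)
          shutil.move(str(img), str(dest / img.name))

    The same identify call works unchanged from PowerShell or bash, so the script ports between the Windows and Ubuntu machines.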

    Read the article

  • How to go from Mainframe to the Cloud?

    - by Ruma Sanyal
    Running applications on IBM mainframes is expensive, complex, and hinders IT responsiveness. The high costs from frequent forced upgrades, long integration cycles, and complex operations infrastructures can only be alleviated by migrating away from a mainframe environment.  Further, data centers are planning for cloud enablement pinned on the principles of operating at significantly lower cost, very low upfront investment, operating on commodity hardware and open, standards-based systems, and decoupling of hardware, infrastructure software, and business applications. These operating principles are in direct contrast with the principles of operating businesses on mainframes. By utilizing technologies such as Oracle Tuxedo, Oracle Coherence, and Oracle GoldenGate, businesses are able to quickly and safely migrate away from their IBM mainframe environments. Further, by running Oracle Tuxedo and Oracle Coherence on Oracle Exalogic, the first and only integrated cloud machine on the market, Oracle customers can not only run their applications on standards-based open systems, significantly cutting their time to market and costs, they can also start their journey of cloud-enabling their mainframe applications.

      - Oracle Tuxedo re-hosting tools and techniques can provide automated migration coverage for more than 95% of mainframe application assets, at a fraction of the cost
      - Oracle GoldenGate can migrate data from mainframe systems to open systems, eliminating risks associated with the data migration
      - Oracle Coherence hosts transactional data in memory, providing mainframe-like data performance and linear scalability
      - Running Oracle software on top of Oracle Exalogic empowers customers to start their journey of cloud-enabling their mainframe applications

    Join us in a series of events across the globe where you'll learn how you can build your enterprise cloud and add tremendous value to your business. In addition, meet with Oracle experts and your peers to discuss best practices and see how successful organizations are lowering total cost of ownership and achieving rapid returns by moving to the cloud. Register for the Oracle Fusion Middleware Forum event in a city near you!

    Read the article

  • Should USB controller drivers be updated?

    - by Coldblackice
    Should USB controller drivers be updated? In Windows 7, many of the core system devices/buses/controllers still use stock Microsoft drivers dating back to 2006 -- things such as USB controllers and the PCI bus (and the devices beneath it in the tree). Should these be getting updated? I would think that surely there have been improvements/fixes/updates to some of these drivers by now, given they are over seven years old. Does Windows know "best" in this case, updating these respective drivers when needed, or should the user be keeping an eye out for drivers for this "vein" of devices (meaning the integrated/core/non-pluggable system devices that are built into the motherboard)?

    Read the article

  • Oracle Java Olympics Between Russia, Ukraine, Belarus and Kazakhstan

    - by Tori Wieldt
    Last month, 151 universities in 11 locations (Saint-Petersburg, Moscow, Donetsk, Tomsk, Odessa, Rostov-on-Don, Ekaterinburg, Khabarovsk, Almaty, Kiev, and Samara) competed in the second round of the Oracle Java Olympics. For two weeks in February, the best university students from Russia, Ukraine, Belarus and Kazakhstan were invited to compete with each other and prove just how good they are at Java programming.  A team of engineers from the Oracle development center in Saint-Petersburg prepared the set of problems to solve during the competition. To win, participants needed to show deep knowledge of Java technologies, from Classloader and NIO to Reflection and JavaDB. Students in each location had a PC with Oracle JDK 1.7u2 and NetBeans 7.1.  As a testing system, the organizers used the open source software Ejudge (with several tweaks specifically for the competition).  Participants submitted their solutions to a remote server where they were tested by prepared test harnesses. All results were posted in real time. "I followed the competition results coming in from the many sites, and it was a really exciting experience, like a horse race or football game!" exclaimed Java evangelist Alexander Belokrylov. Congratulations to everyone who competed! The Olympic finals will be held on April 4th.

    Read the article

  • Recommendation for tuning 100s of SQL databases

    - by wayne
    Hi, I'm running several SQL Servers, each hosting a few hundred multi-gigabyte databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index/profile/tune this large number of databases? As there are at least 600 catalogs, I can't have someone manually profile and index each one according to its usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine!
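
    One low-effort starting point (a sketch, not a full tuning framework; the server name, driver string, and pyodbc dependency are assumptions) is to pull SQL Server's missing-index suggestions, which the engine collects automatically on both 2005 and 2008, and review the highest-impact candidates across all catalogs:

      # Sketch: list the top missing-index candidates server-wide (SQL 2005+).
      import pyodbc

      conn = pyodbc.connect(
          "DRIVER={SQL Server};SERVER=myserver;Trusted_Connection=yes")
      cur = conn.cursor()
      # The missing-index DMVs are server-wide; database_id says which
      # catalog each suggestion belongs to, so one query covers all 600.
      cur.execute("""
          SELECT DB_NAME(d.database_id) AS db,
                 d.statement AS table_name,
                 d.equality_columns, d.inequality_columns, d.included_columns,
                 s.avg_total_user_cost * s.avg_user_impact
                   * (s.user_seeks + s.user_scans) AS est_benefit
          FROM sys.dm_db_missing_index_details d
          JOIN sys.dm_db_missing_index_groups g
            ON g.index_handle = d.index_handle
          JOIN sys.dm_db_missing_index_group_stats s
            ON s.group_handle = g.index_group_handle
          ORDER BY est_benefit DESC""")
      for row in cur.fetchmany(50):   # top 50 candidates across all databases
          print(row.db, row.table_name, row.equality_columns, row.est_benefit)

    The DMV contents reset on instance restart, so this gives a rolling snapshot of real usage rather than a one-off audit -- a reasonable fit when usage patterns differ per customer.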

    Read the article

  • SELinux adding new allowed samba type to access httpd_sys_content_t?

    - by Josh
    allow samba_share_t httpd_sys_content_t { read execute getattr setattr write };
    allow smbd_t httpd_sys_content_t { read execute getattr setattr write };

    Taking a stab in the dark based on the various resources I've looked at, the above policies are what I think I want. I basically want to allow Samba to write to my web docs without giving it free access to the operating system. I read a post by an NSA rep saying the best way was to define a new type and allow both Samba and httpd access to it. Setting the content to public content (public_content_rw_t) does not work without enabling some unrestrictive booleans. To state this in short: how do I allow Samba to access a new type?

    Read the article

  • aligning truecrypt partition on 1.5TB 4kB sector drive

    - by pQd
    Hi, aligning partitions to start at a real physical sector boundary on SSDs / striped RAIDs / 4 KB-sector drives is a 'good thing to do', but I've run into problems when trying to do it for a TrueCrypt partition that will contain ext3, or so it seems. When the drive in question is partitioned properly and formatted with ext3, I get very reasonable write speeds of around 70-80 MB/s, but when I put TrueCrypt and ext3 on top of it, write performance becomes very unstable, bouncing between 1-25 MB/s with very high I/O wait. On the same server I don't have any performance issues with ext3 on top of TrueCrypt on regular 512 B-sector 500 GB SATA disks. So my best guess is that the I/O waits are caused by misalignment, but I cannot really find reliable information on how to calculate the optimal partition beginning. I've tried starting it at logical sector 128, and I've also tried sector 8132 as suggested here, but both gave me very bad and unstable performance. Do you have any experience with a similar setup? Thanks!
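
    For what it's worth, the usual rule is that the partition's byte offset (start LBA × 512) should be a multiple of 4096, i.e. the starting sector should be a multiple of 8. A quick check of the sectors mentioned above:

      # Is a partition's start LBA aligned to 4 KiB physical sectors?
      for lba in (128, 2048, 8132):
          offset = lba * 512                       # byte offset on disk
          print(lba, "aligned" if offset % 4096 == 0 else "misaligned")
      # -> 128 aligned, 2048 aligned, 8132 misaligned

    So 8132 is actually misaligned, while 128 sits cleanly on a 4 KiB boundary; if sector 128 still performs badly, the offset at which TrueCrypt places its encrypted data inside the partition would be worth checking as well.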

    Read the article

  • How do I add a shapefile in ArcGIS via python scripting?

    - by Tom W
    I am trying to automate various tasks in ArcGIS Desktop (using ArcMap, generally) with Python, and I keep needing a way to add a shapefile to the current map (and then do stuff to it, but that's another story). The best I can do so far is to add a layer file to the current map, using the following ("addLayer" is a layer file object):

      def AddLayerFromLayerFile(addLayer):
          import arcpy
          mxd = arcpy.mapping.MapDocument("CURRENT")
          df = arcpy.mapping.ListDataFrames(mxd, "Layers")[0]
          arcpy.mapping.AddLayer(df, addLayer, "AUTO_ARRANGE")
          arcpy.RefreshActiveView()
          arcpy.RefreshTOC()
          del mxd, df, addLayer

    However, my raw data is always going to be shapefiles, so I need to be able to open them. (Equivalently: convert a shapefile to a layer file without opening it, but I'd prefer not to do that.)
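
    One way to bridge the gap (a sketch assuming ArcGIS 10.x arcpy; the shapefile path is hypothetical): make an in-memory feature layer from the shapefile with the Make Feature Layer tool, wrap it in a Layer object, and feed that to the helper above:

      import arcpy

      shp = r"C:\data\roads.shp"                           # hypothetical shapefile
      arcpy.MakeFeatureLayer_management(shp, "roads_lyr")  # in-memory feature layer
      addLayer = arcpy.mapping.Layer("roads_lyr")          # wrap it as a map layer
      AddLayerFromLayerFile(addLayer)

    This avoids writing a .lyr file to disk; the layer exists only for the session.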

    Read the article

  • Making files generally available on a Linux system (when security is relatively unimportant)?

    - by Ole Thomsen Buus
    Hi, I am using Ubuntu 9.10 on a stationary PC. I have a secondary 1 TB hard drive with a single big logical partition (currently formatted as ext4). It is mounted as /usr3 with options user, exec in /etc/fstab. I am doing high-speed imaging experiments -- well, only 260 fps, but that still creates many individual files since each frame is saved as one PNG file. The PC is not used by anyone other than me, which is why the default security model posed by Ubuntu is not necessary. What is the best way to make the entire contents of /usr3 generally available on all systems, in case I need to move the hard drive to another Ubuntu 9.x or 10.x machine? When grabbing images with the FireWire camera I use a self-made console-based grabbing utility running under sudo. This creates all files with root as owner and group. I am logged in as user otb, and usually I do the following when I need to make files generally available to otb:

      sudo chown otb -R *
      sudo chgrp otb -R *
      sudo chmod a=rwx -R *

    This takes some time since the disk now contains ~200,000 individual files. After this, how would Linux behave if I moved the hard drive to another system where the user otb is also available? Would the files still be accessible without using sudo?

    Read the article

  • PHP Apache XAMPP Run Multiple Scripts from CLI in Background

    - by Pamela
    How can I simultaneously run dozens of PHP scripts in the background from XAMPP's command line interface? Someone suggested a batch file, but when I tried executing this:

      start php 1.php
      start php 2.php
      start php 3.php

    it only opened a command prompt window; when I closed that window, two more command prompt windows opened up executing 2.php and 3.php. I want to run as many scripts as I want, all simultaneously and all in the background. What is the best way to accomplish this, and how can it be done?
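
    One alternative to the batch approach (a sketch, assuming php.exe from XAMPP is on the PATH and the scripts are named 1.php, 2.php, ...) is to launch everything from a small Python driver, which keeps the scripts in the background with no console windows at all:

      # Launch many PHP CLI scripts in parallel without console windows (Windows).
      import subprocess

      procs = [
          subprocess.Popen(
              ["php", "%d.php" % i],
              creationflags=subprocess.CREATE_NO_WINDOW,  # no popup per script
          )
          for i in range(1, 4)   # 1.php, 2.php, 3.php -- extend as needed
      ]
      for p in procs:
          p.wait()   # optional: block until every script finishes

    Staying within plain batch, `start /B php 1.php` (the /B flag runs the command without creating a new window) gets close to the same effect.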

    Read the article

  • Problems with Maverick upgrade

    - by altenuta
    I upgraded to Maverick 10.10 from Lucid. I have an old Toshiba Satellite with a 1.1 GHz processor and 256 MB of RAM. Initially I couldn't get my wireless to work; that solved itself after installing various updates and programs. The problems that remain are:

      - I have to authorize at least 2 times at start-up. This machine is Ubuntu only.
      - No boot load screen.
      - I have a ton of programs and system directories that are in my home folder. Is this normal?
      - It is difficult to wake the computer from sleep. Usually I just shut it down and restart. Tonight I waited and got a message about corrupt memory.
      - The computer takes forever to do just about everything, whether starting programs or doing things on the web.

    I am a longtime Mac user (since 1986), and I also manage a network of several windoze machines. I am definitely a GUI guy and do very little in the terminal, so I really need to know where to begin to get things straightened out. Can I rescue this machine without wiping it and doing a fresh install? This is basically a hobby machine. Aside from all the programs and upgrades I've installed, I have almost no files or documents to worry about saving. Anyone have any ideas about the problems I'm having and the best way to proceed? Thanks, Al

    Read the article

  • Create Virtual Image of Laptop before Formatting

    - by Simon Mark Smith
    I have a 3-year-old laptop running Windows XP that I used for business. Although I have not used the laptop in over a year, I now want to re-commission it with a fresh install of Windows 7. Before I do the fresh install, I want to create a virtual image of the laptop that I can keep and potentially run on my desktop machine, should I ever need to access any of the old files/projects it currently contains. I know most people will say to just copy the files over to the desktop, but my concern is the configuration of the laptop. I used to use it for development, and it has older versions of Visual Studio, SQL Server, ActiveX controls, etc. than I currently use, so I really want to preserve the environment, not just the files. So really I am asking: what is the best toolset/method to achieve this? I understand there are free VM tools available, but I have never done this before and would appreciate any help.

    Read the article

  • ClearTrace Supports Statement Level Events

    - by Bill Graziano
    One of the requests I get on a regular basis is to capture the performance of statement-level events.  The latest beta has this feature available.  If you’re interested in this, I’d like to get some feedback. I handle the SP:StmtCompleted and SQL:StmtCompleted events.  These report CPU, reads, writes and duration. I’m not in any way saying it’s a good idea to trace these events.  Use them with caution, as they can make your traces much larger. If there are statement-level events in the trace file, they will be processed.  However, the query screen displays batch-level *OR* statement-level events.  If it did both we’d be double counting. I don’t have very many traces with statement-completed events in them.  That means I only did limited testing of how ClearTrace parses these events.  It seems to work well so far though.  Your feedback is appreciated. If you ever write loops or cursors in stored procedures, you’re going to get huge trace files.  Be warned. I also fixed an annoying bug where ClearTrace would fail and tell you a value had already been added.  This is a result of the collection I use being case-sensitive while SQL Server is not.  I thought I had properly coded around that but finally realized I hadn’t.  It should be fixed now. If you have any questions or problems, the ClearTrace support forum is the best place for those.

    Read the article

  • GWB | Comment Spam On The Rise

    - by Geekswithblogs Administrator
    I don’t know a member on Geekswithblogs.net who is not frustrated with the amount of spam they get. It is a major problem that we have been dealing with for 6+ years, and we keep trying to come up with new ways to fight it. As spammers get smarter, we have to continue to upgrade the tools we use to combat them. Just like any spam filter, sometimes good comments get caught up in it. This has been a huge concern for some bloggers, forcing us to tune what we classify as spam and not spam. So this post is here just to state that we know the spam problem is like a wave: sometimes it is not so bad, other times it gets worse. Right now it is worse. One measure we will take soon, if it continues, is requiring CAPTCHA, since most members don’t clean up their spam via the admin tools (which are not the best tools, I know). I also want to solicit a better approach from the members: what would you like the spam interface on GWB to be like? Be realistic, because we all want “zero spam, good comments live”. Related Tags: Geekswithblogs.net, Spam

    Read the article

  • Rope Colliding with a Rectangle

    - by Colton
    I have my rope, and I have my rectangles. The rope is similar to the implementation found here: http://nehe.gamedev.net/tutorial/rope_physics/17006/ Now I want to make the rope properly collide with the rectangles, such that the rope will not pass through a rectangle, will wrap around it, and all that good stuff. Currently I have it set up so that no rope node can pass through a rect (successfully); however, this means a rope segment can still pass through a block. For example: [image omitted]. So the question is: what can I do to fix this? What I have tried:

      - Creating a rectangle between two nodes of a rope, calculating the rotation between the nodes, and getting myself a transformed rectangle. I can successfully detect a collision between rope segments and a (non-transformed) rectangle.
      - Creating a new node or pivot point around the corner of the block, and rearranging nodes to point to the corner node. The trouble is determining which corner the rope segment is passing through. And then the current rope setup goes wonky (it is based on Verlet integration, so a sudden change in position causes the rope to wiggle like a seismograph during a magnitude 8 earthquake), among other issues that might be solvable, but it's turning into a case-by-case thing, which doesn't seem right.

    I think the best answer here would just be a link to a tutorial (I simply can't find any; most lead to Box2D or Farseer, but I want to at least learn how it works before I hide behind an engine).
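
    For context, here is the Verlet step such rope nodes typically use (a sketch: 2D, unit mass, fixed timestep assumed). Because velocity is implicit in the difference between the current and previous positions, any abrupt positional correction -- like snapping a node to a rectangle corner -- injects a large implicit velocity, which is exactly the seismograph wiggle described above. Moving the node toward the corner over a few frames, or shifting `prev` by the same offset as `pos`, keeps the correction from becoming a velocity spike:

      # One Verlet integration step for a rope node (unit mass).
      GRAVITY = (0.0, 9.8)
      DT = 1.0 / 60.0

      def verlet_step(node):
          x, y = node["pos"]
          px, py = node["prev"]
          # implicit velocity = pos - prev; acceleration enters as a * dt^2
          nx = x + (x - px) + GRAVITY[0] * DT * DT
          ny = y + (y - py) + GRAVITY[1] * DT * DT
          node["prev"] = (x, y)
          node["pos"] = (nx, ny)

      # Teleporting only node["pos"] creates a huge (pos - prev) next frame;
      # shifting node["prev"] by the same offset preserves the old velocity.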

    Read the article

  • PHP - Making CMS (architecture, etc.)

    - by UnknownProgramer
    I'm in the planning stage of a new CMS. I previously used WordPress and other open source CMSes for my clients, but I always had to write new modules and even mess with the code to do certain things, which, as you understand, is not the best thing to do. So I finally decided to make my own CMS that works the way I need. But before I start, I would like to think it through carefully to ensure that I won't need to rewrite it from the ground up because I forgot to include some feature in the architecture or implemented it wrong. I would like to hear your thoughts, and most importantly I would like you to suggest some articles or books on the subject, especially on the architecture of such systems. I googled a few good books, but that is not enough. The way I'm planning to do it: PHP5, completely OOP, modular architecture. You create a page and add any modules you need there, but modules are not global; they are local to a page. You can make two pages with the same module, and the content will be different if you set different content IDs for the two entities, or it can be set the same, so that the two pages share the content of the modules placed there. Also, I plan to support an online storage web service (like Amazon S3) for images and files, so I would like to hear your thoughts on that too. I have not yet decided how to store language data; I don't want to use the DB for that, but I haven't decided yet. I also think I will support other databases with a global DB class and separate DB wrappers for MySQL and the rest. And, well, I would appreciate any other information you can provide on the subject.
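
    To make the page-local module idea concrete, a minimal sketch (in Python for brevity; class and field names are hypothetical, not a proposed PHP API):

      # Pages own module instances; a content ID decides whether two pages
      # share content or diverge.
      class Module:
          def __init__(self, kind, content_id):
              self.kind = kind              # e.g. "news", "gallery"
              self.content_id = content_id  # which content record to render

      class Page:
          def __init__(self, slug):
              self.slug = slug
              self.modules = []

      home, about = Page("home"), Page("about")
      home.modules.append(Module("news", content_id=7))
      about.modules.append(Module("news", content_id=7))  # same content on both
      # Give one of them content_id=8 and the two "news" modules diverge.

    The same shape extends naturally to the DB layer: modules talk to one abstract DB class, and each engine (MySQL, etc.) gets its own concrete wrapper behind it.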

    Read the article

  • Heading Out to Oracle Open World

    - by rickramsey
    In case you haven't figured it out by now, Oracle reserves an awful lot of announcements for Oracle Open World. As a result, the show is always a lot of fun for geeks. What will the Oracle Solaris team have to say? Will the Oracle Linux team have any surprises? And what about Oracle hardware? For my part, I'll be one of the lizards at the OTN Lounge with the OTN crew, handing out t-shirts to system admins and developers, or anyone who is willing to impersonate one. I understand, not everyone can have the raw animal magnetism of a sysadmin, or the debonair sophistication of a C++ developer, so some of you have no choice but to pretend. I won't judge. I'll also be doing video interviews of as many techie people as I can corner. I've got more than 30 interviews already scheduled. Most of them will be 3-5 minutes long. I'll be asking our best technical minds what's cool about their latest technologies and what impact they will have on system admins or system developers. I'll be posting those videos here: Find OTN Systems Videos from Oracle Open World Here! We've got some great topics in mind. A dummies guide to hardware-assisted cryptography with Glenn Brunette. ZFS deduplication. The momentum building around Oracle Solaris 11, with Lynn Rohrer, plus conversations with partners who have deployed Oracle Solaris 11. Migrating to Oracle Database with SQL Developer. The whole database cloud thing. Oracle VM and, of course, Oracle Linux. So even if you can't be part of the fun, keep an eye out for the videos on our YouTube channel. - Rick

    Read the article

  • Choosing Technology To Include In Software Design

    How many of us have been forced to select one technology over another when designing a new system? What factors do we, and should we, consider? How can we ensure the correct business decision is made? When faced with this type of decision it is important to gather as much information as possible regarding each technology being considered, as well as the project itself. Additionally, I tend to delay my decision about the technology until it ultimately must be made, because as the project progresses, requirements and other factors can change which technology is best for the project.

    Important factors to consider when making technology decisions:

      - Time to Implement and Maintain
      - Total Cost of Technology (including implementation and maintenance)
      - Adaptability of Technology
      - Implementation Team's Skill Sets
      - Complexity of Technology (including implementation and maintenance)
      - Forecasted Return On Investment (ROI)
      - Forecasted Profit on Investment (POI)

    Of these factors, ROI and POI weigh the heaviest because they take the other factors into consideration when calculating the profitability of and return on an investment. For a real-world example, let us consider developing a web-based lead management system for a new company. This system can be hosted either on a Microsoft Windows-based web server or on a Linux-based web server.

    Important factors for this example:

      - Implementation Team's Skill Sets
        - Member 1 -- Skill set: Classic ASP, ASP.NET, and MS SQL Server; Experience: 10 years
        - Member 2 -- Skill set: PHP, MySQL, Photoshop and MS SQL Server; Experience: 3 years
        - Member 3 -- Skill set: C++, VB6, ASP.NET, and MS SQL Server; Experience: 12 years
      - Total Cost of Technology (including implementation and maintenance)
        - Linux -- Initial year: $5,000 (random value); additional years: $3,000 (random value)
        - Windows -- Initial year: $10,000 (random value); additional years: $3,000 (random value)
      - Complexity of Technology
        - Linux -- Large learning curve with user-driven documentation; estimated learning cost: $30,000
        - Windows -- Minimal, based on the team's skills, with Microsoft documentation; estimated learning cost: $5,000
      - ROI
        - Linux -- Initial total cost: $35,000; additional cost: $3,000 per year
        - Windows -- Initial total cost: $15,000; additional cost: $3,000 per year

    Based on these hypothetical numbers it makes more sense to select the Windows-based web server, because the initial investment in the technology is much lower compared to the Linux-based web server.
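
    Spelling out the example's arithmetic (numbers taken straight from the lists above):

      # Initial totals = first-year technology cost + learning cost.
      linux_initial = 5_000 + 30_000      # = 35,000
      windows_initial = 10_000 + 5_000    # = 15,000
      annual = 3_000                      # recurring cost, identical for both

      for years in (1, 3, 5):
          print(years, linux_initial + annual * years,
                windows_initial + annual * years)
      # Windows stays $20,000 cheaper at every horizon, because the recurring
      # costs match and only the initial investments differ.

    Note this captures only cost; a full ROI comparison would also need the revenue side, which the example leaves out.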

    Read the article

  • Web Seminar - The Oracle Database Appliance: How to Sell a Unique Product!

    - by swalker
    Dear partner, You are exclusively invited to join us for a webcast, dedicated to Oracle’s EMEA partners, on the Oracle Database Appliance value proposition, positioning and ecosystem – to help you capture new business and help your customers roll out their solutions fast, easily, safely and with maximum cost efficiency! Join us to learn about:

      - ODA Benefits: Fast, Easy, Cost Efficient, Highly Reliable
      - Feedback from early Customer Wins: What can we Learn?
      - Objection Handling: Overcoming the most common customer questions
      - Going beyond the Database: The ODA ecosystem for applications, backup & more

    When combined with your high-value services (e.g., migration, consolidation), the end result is a database system that you can use to grow the business in your existing accounts or to capture new business. Join us at the EMEA partner webcast hosted by Robert Van Espelo, Cloud and Virtualization Leader, EMEA Business Development, on Thursday, April 12, at 9:00am UK / 10:00am CET. The presentation will be given in English. To register for this webcast click here. We look forward to talking to you on April 12! Best regards, Giuseppe Facchetti, EMEA Partner Business Development Manager, Oracle EMEA, Hardware Sales; Paul Leonard, EMEA Partner Marketing Manager, Oracle EMEA, Systems Marketing

    Read the article

  • CentOS Default ACLs on Existing File System Objects

    - by macinjosh
    Is there a way to have existing file system objects inherit newly set default ACL settings from their parent directories? The reason I need this is that I have a user who connects via SFTP to my server. They are able to change directories in their FTP client and see the root folder and the rest of the server. They don't have permission to change or edit anything but their own user directory, but I would like to prevent them from even viewing the contents of other directories. Is there a better way to do this than ACLs? If ACLs are the way to go, I'm assuming a default ACL on the root directory would be the best way to restrict access; I could then selectively give the user permission to view certain directories. The problem is that default ACLs are only inherited by new file system objects, not existing ones.
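
    Default ACLs never apply retroactively, so the usual workaround is a one-time recursive pass that stamps a matching access ACL onto everything that already exists, while the default ACL covers whatever gets created later. A sketch (the user name and path are hypothetical; this grants read access on a subtree the user should be allowed to see):

      # Default ACLs cover only *future* objects; existing ones need -R.
      import subprocess

      ROOT, USER = "/srv/shared/reports", "sftpuser"

      # Default entry on every directory: new children inherit it automatically.
      subprocess.run(["find", ROOT, "-type", "d", "-exec",
                      "setfacl", "-m", "d:u:%s:rX" % USER, "{}", "+"], check=True)
      # Matching access entry stamped onto everything that already exists.
      subprocess.run(["setfacl", "-R", "-m", "u:%s:rX" % USER, ROOT], check=True)

    (The same two commands work directly from the shell; Python is just the wrapper here.)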

    Read the article
