Search Results

Search found 13853 results on 555 pages for 'soa security'.


  • XAMPP: Daemon is already running, but it's NOT Apache

    - by TedvG
    This one is giving me a headache... I have installed XAMPP for Linux 1.7.7 on Ubuntu 12.10. I haven't installed the latest version because of the new security "feature" which makes XAMPP so secure I can't get it running... but that's another story. After it installed and ran fine for a couple of months, I now get the famous "XAMPP: Another web server daemon is already running." error while starting XAMPP. I've googled extensively and can rule out the following: there is no other Apache installation, just XAMPP; there are no apache or apache2 services running; there are no services running that use port 80 (checked with netstat -an | grep -w 80). I have also done a fresh install of XAMPP 1.7.7, but that gives me the same result. I think I have tried every solution on the first two result pages of Google and am nowhere nearer to a solution. Can anyone give me pointers on how to find the mysterious web daemon that is already running?
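
    For reference, a few commands that can reveal whatever is actually holding port 80 (assuming a stock Ubuntu install; lsof may need installing first):

        sudo lsof -i :80                   # any process bound to port 80, with PID and name
        sudo netstat -tlnp | grep ':80 '   # TCP listeners, with owning PID/program
        sudo fuser -v 80/tcp               # PIDs holding TCP port 80

    If nothing shows up, XAMPP's startup check may be tripping over a stale PID file; /opt/lampp/logs/httpd.pid is worth inspecting (that path is an assumption based on the standard XAMPP layout).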

    Read the article

  • links for 2011-02-11

    - by Bob Rhubart
    New Versions of Whitepapers are available (The Shorten Spot) Anthony Shorten shares the details on several recently updated Oracle Utilities Application Framework white papers. (tags: oracle otn whitepapers) Energy Firms Targeted for Sensitive Documents (Oracle IRM, the official blog) Numerous multinational energy companies have been targeted by hackers who have been focusing on financial documents related to oil and gas field exploration, bidding contracts, and drilling rights, as well as proprietary industrial process documents, according to a new McAfee report. (tags: oracle otn security) Get Your Workshop Hands On! New Developer Day Cities & Dates (Oracle Technology Network Blog (aka TechBlog)) Oracle Technology Network's Justin Kestelyn shares information on upcoming OTN Developer Days. (tags: oracle otn events)

    Read the article

  • HTG Explains: How the SmartScreen Filter Works in Windows 8

    - by Chris Hoffman
    Windows 8 includes a SmartScreen filter that prevents unknown and malicious programs from running. SmartScreen is part of Internet Explorer 8 and 9; with Windows 8, it's now integrated into the operating system. SmartScreen is a useful security feature that helps keep bad applications from running, but it may occasionally block a legitimate application. SmartScreen also reports some information to Microsoft, so it may have privacy implications.

    Read the article

  • Solaris Day in NY and Boston

    - by unixman
    Hey all -- we're hosting yet another Solaris event in New York. This one will be on November 29th, focused on some key in-depth technologies in Solaris 11, which was just released earlier this month. Speakers include Dave Miner, Glenn Brunette and Jeff Victor. It starts in the morning and goes through lunch; check out the agenda via the link below. Topics include the new and improved installation and package management experience, virtualization, ZFS and security. Please check it out and come join us! The RSVP link is below: http://www.oracle.com/go/?&Src=7239490&Act=34&pcode=NAFM10128512MPP016 Additionally, if you are in the Boston area, an identical event will be held in Burlington the following day, on November 30th. The RSVP link for that is http://www.oracle.com/us/dm/h2fy11/21285-nafm10128512mpp013-oem-525338.html Hope to see you there!

    Read the article

  • Architecture for subscription based application

    - by John
    This is a question about the architecture of my application, I think. I have a Rails application where companies can administer all things related to their clients. Companies buy a subscription and their users access the application online. Hopefully I will get multiple companies subscribing to my application/service. The thing is, what should I do with my code and database?
      - Separate app code base and database per company
      - One app code base but a separate database per company
      - One app code base and one database
    The decision I have to make involves security (e.g. a user from company X should not see any data from company Y), performance (let's suppose it becomes successful; it should perform well) and scalability (again, if successful, it should perform well but also be easy for me to handle all the companies, code changes, etc.). For the sake of maintainability, I tend to opt for the one code base. For the database I really don't know at this moment. So what do you think is the best option?

    Read the article

  • Meet This Year's Most Impressive WebCenter Customer Projects

    - by Michael Snow
    Oracle Fusion Middleware: Meet This Year's Most Impressive Customer Projects. Oracle OpenWorld Session – Tuesday Oct. 2, 2012: Moscone West, Room 3001 at 11:45 AM. This year the Oracle Excellence Awards had an amazing number of nominations, and each group at Oracle had the challenge of selecting the most innovative and game-changing nominations for their winners. The Fusion Middleware Innovation Awards, jointly sponsored by Oracle, OAUG, QUEST, ODTUG, IOUG, AUSOUG and UKOUG, honor organizations using Oracle Fusion Middleware to deliver unique business value. This year, the awards will recognize customers across eight distinct categories:
      - Oracle Exalogic
      - Cloud Application Foundation
      - Service Integration (SOA) and BPM
      - WebCenter
      - Identity Management
      - Data Integration
      - Application Development Framework and Fusion Development
      - Business Analytics (BI, EPM and Exalytics)
    The nominations included the pioneers in our customer base using these solutions in innovative ways to achieve significant business value. Tune in this afternoon for a listing of the WebCenter winners.

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput, with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example a destination and associated SLAs. For each packet, the router has to determine the address of the next "hop" to the destination, and it has to decide how to prioritize the packet. If it's a high-priority packet, it has to be sent on its way before lower-priority packets. As a consequence of prioritizing high-priority packets, lower-priority packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone's sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won't be happy with a "choppy" VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator; the router is most likely in a remote location, so it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

    How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the "owner" of the packet, etc. It looks up its internal database of "rules" for how to process the packet and handles it accordingly. The software might choose to hold on to the packet safely for some period of time, if it's a low-priority packet. Ah, this sounds very much like a database problem. For each packet, you have to minimally:
      - Look up the most efficient next "hop" towards the destination. The "most efficient" next hop can change, depending on latency, availability etc.
      - Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp).
      - Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small "slice" of a session; the context for the "header" packet needs to be stored in the router in order to make this work.
      - If the priority of the packet is low, "store" the packet temporarily in the router until it is time to forward it to the next hop.
      - Update various statistics about the packet.
    In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the "send" in a single transaction, so that the forwarding address doesn't change while you're sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high-performance, highly available embeddable database, designed for exactly these kinds of usage scenarios, and it is currently being used in them. First and foremost, Berkeley DB (or BDB for short) is very fast; it can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability: if one board in the router fails, the system can automatically fail over to another board, with no manual intervention required. BDB is self-administering; there's no need for manual intervention to maintain a BDB application, and no need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice: spend valuable resources to implement similar functionality, or simply embed BDB in your application and off you go. I know what I'd do: choose BDB, so I can focus on my business problem. What will you do?

    Read the article

  • Is scanning the ports considered harmful?

    - by Manoj R
    If an application scans the ports of other machines to find out whether a particular service or application is running, is that considered harmful? Is it treated as hacking? How else can one find out which port the desired application is running on (without user input)? Let's say I only know the port range in which the other application could be running, but not the exact port. In this case, my application probes each port in the range to check whether the other application is listening on it, using an already-defined protocol. Is this a normal design, or is it considered harmful from a security standpoint?
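
    For what it's worth, probing a known port range from a script is straightforward; a minimal sketch using netcat (the host name and range here are invented for illustration):

        for port in $(seq 8000 8010); do
            # -z: scan without sending data; -w 1: one-second timeout per port
            nc -z -w 1 app-host.example.com "$port" && echo "listening: $port"
        done

    As a design note, a plain TCP connect to a handful of ports on machines you administer is ordinary service discovery; it is sweeping large ranges on machines you don't own that intrusion-detection systems tend to flag as hostile.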

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug. Problem 1: Debugging users' bug reports. When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to debug customers' issues and sort out what strange schema data Oracle was returning. Problem 2: Test execution time. Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless. The solution: we needed to be able to populate the schema of a database without actually connecting to it. Well, the IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required, without needing to connect to the original database. This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.
    Query snapshots also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot to send along with a bug report, which we can immediately replay on our machines to debug the issue, rather than having to obtain database backups and restore them to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data. Query snapshots implementation: actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database. That algorithm uses data from both databases to find all the needed objects - which database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just of populating a single database. Furthermore, although the population queries (e.g. querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot, to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly. Query snapshots are a significant feature in Schema Compare that really helps us debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team, helping us fix bugs in the product much faster than we otherwise could.

    Read the article

  • Spreadsheet or writing an application?

    - by Lenny222
    When would you keep simple to medium-complexity personal calculations in a spreadsheet (Excel etc.), and when would you write a small program or script for them? For example, say you want to calculate what size of mortgage you can afford when buying a house. I could create a spreadsheet and have a nice tabular representation. On the other hand, if I wrote a small script in a nice language (in my case Haskell), I'd have the security of a nice type system, preventing typos etc. What are the pros and cons, in your opinion?

    Read the article

  • Internal and external API architecture

    - by Tacomanator
    The company I work for maintains a successful SaaS product that grew "organically" over the years. We are planning to expand the line with a suite of new products that will share data with the existing product. To support this, we are looking to consolidate business logic into a single place: a web service layer. The WS layer will be used by the web applications, a tool to import data, and a tool to integrate with other client software (not an API per se). We also want to create an API that can be used by those of our customers who are capable of using it to create their own integrations. We are struggling with the following question: should the internal API (aka the WS layer) and the external API be one and the same, with security and permission settings controlling what can be done by whom, or should they be two separate applications, where the external API just calls the internal API like any other application? So far in our debate it seems that separating them may be more secure, but will add overhead. What have others done in a similar situation?

    Read the article

  • National ID Cards

    Have you heard about the national ID card? While I had heard little about this program I had not given it any real attention until recently while listening to Bill O’Reilly and Lou Dobbs. Both made some great points as to how such a card could increase security and reduce fraud in the United States [...]

    Read the article

  • Get an Introduction to Oracle ADF - Online Anytime

    - by shay.shmeltzer
    Last year we started recording sessions about advanced ADF topics (binding, security, skinning, etc.) and published them on the new Oracle ADF Insider page on OTN. These are the type of sessions we usually deliver at the Oracle Develop or ODTUG conferences, but now you can watch them whenever you need to better understand these topics. We are now extending the series to also cover the basics of Oracle ADF. We just published three sessions that cover an overview of Oracle ADF, an introduction to Oracle ADF Business Components, and an overview of Oracle ADF Faces. So if you are starting out and need to quickly understand what ADF is all about, or if you just want to know what ADF offers, these might be a good starting point. Check out the ADF Insider Basics sessions here.

    Read the article

  • How to make a great functional specification

    - by sfrj
    I am going to start a little side project very soon, but this time I don't want to do just the little UML domain model and use case diagrams I often do before programming; I'm thinking about writing a full functional specification. Does anybody with experience writing functional specifications have recommendations on what I need to add to it? What would be the best way to start preparing it? Here are the topics I think are most relevant:
      - Purpose
      - Functional Overview
      - Context Diagram
      - Critical Project Success Factors
      - Scope (In & Out)
      - Assumptions
      - Actors (Data Sources, System Actors)
      - Use Case Diagram
      - Process Flow Diagram
      - Activity Diagram
      - Security Requirements
      - Performance Requirements
      - Special Requirements
      - Business Rules
      - Domain Model (Data Model)
      - Flow Scenarios (Success, alternate…)
      - Time Schedule (Task Management)
      - Goals
      - System Requirements
      - Expected Expenses
    What do you think about these topics? Should I add something else, or maybe remove something?

    Read the article

  • Integrating Azure ServiceBus and SharePoint 2010

    - by Sahil Malik
    SharePoint 2010 Training: more information. My new article is finally online; I had been waiting for this for a while. The thing is, AppFabric became .NET 4 and left SharePoint 2010 behind. But fear not, we have the REST API. That brings up interesting challenges: how to integrate Azure Service Bus with SharePoint 2010 (yes, 2010, not vNext – I’m not giving NDA information out, you fool), which design patterns you can use, and how to work through thorny issues like security, sessions, and app design. Well, I hope you like my new article, SharePoint Applied: Azure ServiceBus and SharePoint 2010. Enjoy! Read full article ....

    Read the article

  • Prevent APT from overwriting JCE jars

    - by Doc
    My server runs a Java application that requires me to replace a few Java library files with ones I downloaded on my own. This has to do with the JCE security extensions and isn't really relevant to my question. I've found that these library files tend to get overwritten by apt when it later updates my Java package. Is there an apt-friendly way of masking these specific files so apt won't touch them? Potential solutions I've considered: removing the write flag from the files, though I expect this will cause apt to spew its guts everywhere when it later tries to overwrite them; perhaps there's a custom Java library directory I don't know of, where I can park my files and have them loaded instead of the package's defaults; and the last-resort option of writing a cron job to periodically replace the files with my versions. I hate that last option.
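
    One apt-native answer is dpkg-divert, which tells dpkg to install the package's copy of a file under a different name, leaving your replacement in place across upgrades. A sketch, assuming a Sun JDK 6 layout (adjust the path to wherever your JCE jars actually live):

        sudo dpkg-divert --add --rename \
            --divert /usr/lib/jvm/java-6-sun/jre/lib/security/local_policy.jar.dpkg-dist \
            /usr/lib/jvm/java-6-sun/jre/lib/security/local_policy.jar
        # now place your own local_policy.jar at the original path; future package
        # upgrades will write their copy to the .dpkg-dist name instead of clobbering yours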

    Read the article

  • Getting Windows Azure SDK 1.1 To Talk To A Local DB

    - by Richard Jones
    Just found this, if you’re using Azure 1.1, which you probably will be if you’ve moved to Visual Studio 2010. To change the default database to something other than sqlexpress for Development Storage, look at http://msdn.microsoft.com/en-us/library/dd203058.aspx. At the bottom it states: Using Development Storage with SQL Server Express 2008 - by default the local Windows group BUILTIN\Administrator is not included in the SQL Server sysadmin server role on new SQL Server Express 2008 installations. Add yourself to the sysadmin role in order to use the Development Storage services on SQL Server Express 2008. See SQL Server 2008 Security Changes for more information. Changing the SQL Server instance used by Development Storage - by default, Development Storage will use the SQL Express instance. This can be changed by calling “DSInit.exe /sqlinstance:<SQL Server instance>” from the Windows Azure SDK command prompt.
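
    So, for example, to repoint Development Storage at a local named instance you would run something along these lines from the Windows Azure SDK command prompt (the instance name here is made up; substitute your own):

        DSInit.exe /sqlinstance:SQL2008

    DSInit then (re)initializes the development storage database on that instance.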

    Read the article

  • What skills does a web developer need to have/learn?

    - by Victor
    I've asked around, and here's what I've gathered so far, in no particular order:
      Knowledge:
      - Web server management (IIS, Apache, etc.)
      - Shell scripting
      - Security (e.g. ethical hacking knowledge)
      - Regular expressions
      - HTML and CSS
      - HTTP
      - A web programming language (PHP, Ruby, etc.)
      - SQL (command based, not GUI, since most server environments are terminal only)
      - JavaScript and a library (jQuery)
      - Versioning (SVN, Git)
      - Unit and functional testing
      Tools:
      - Build tools (Ant, NAnt, Maven)
      - Debugging tools (Firebug, Fiddler)
    Mastering the above makes you a good web developer. Any comments?

    Read the article

  • Moving My Blog

    - by Hirt
    Oracle has, unfortunately, moved to a new blogging platform. For security reasons, it is no longer possible to use external tools such as Windows Live Writer. Since this makes blogging too time consuming for me, I've decided to use only my private blog, even for work-related entries. This is where you can find my blog entries from now on: http://hirt.se/blog/ Note that it is hosted on the poor server in my garage, hooked up to an ADSL modem. It will probably be dog slow. Sorry for that.

    Read the article

  • My ubuntuone is broken, what is the problem?

    - by user6962
    I am on 10.10 and u1sdtool seems to be completely broken. I reinstalled it; no change. I can add my PC to the account, but it is added twice(!) every time I try, so I have my netbook and two copies of my PC in the account. The netbook with 10.04 has no problems. Below is the error message I get when attempting to start Ubuntu One on the command line:
        desktop:~$ u1sdtool --status
        Oops, an error ocurred: Traceback (most recent call last): Failure: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
        desktop:~$
    Starting it from the Me Menu has the same effect: the HDD gets really busy for a minute and then nothing happens; the client will not start. Nothing in the syslog or anywhere.
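
    Before anything drastic, it may be worth stopping any half-dead syncdaemon and starting a fresh one; a sketch (the flags and the process name below are assumptions based on the u1sdtool of that era):

        u1sdtool --quit                             # ask any running syncdaemon to exit
        killall ubuntuone-syncdaemon 2>/dev/null    # hard stop if it is wedged
        u1sdtool --start                            # spawn a fresh daemon
        u1sdtool --status                           # see whether it answers this time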

    Read the article

  • Slow Internet Performance in 12.04 LTS

    - by Mad
    I have installed Ubuntu 12.04 LTS and have encountered the problems below. Please note, I am a beginner with Linux; I would appreciate it if someone could explain step by step how to overcome these issues.
      1. I have two OSes; the Internet is much slower in Ubuntu 12.04 than in Windows 7.
      2. System performance is very slow.
      3. After installing Ubuntu 12.04, my screen brightness was initially dark. However, I have resolved this issue and it is now working fine.
      4. I am unable to connect to the wireless network after entering my security credentials.
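
    For issues 1 and 4, a sensible first step is to identify the wireless adapter, its driver, and any radio blocks before changing settings; a few read-only diagnostics:

        sudo lshw -C network                    # adapter model and the kernel driver in use
        rfkill list                             # any soft/hard blocks on the wireless radio?
        dmesg | grep -i -e firmware -e wlan     # driver or firmware errors, if any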

    Read the article

  • Implementing a set of processes in a stored procedure or through the code?

    - by just_name
    I want to know the suitable method (best practice) for implementing the following case. I have a set of processes like this:
      1. Select data from a set of DB tables.
      2. Loop over the selected results.
      3. Make some checks on each iteration.
      4. Insert the result into another table.
    Should I implement these steps in a stored procedure, or in a transaction in my code (ASP.NET)? I'm concerned with performance, security and reliability.

    Read the article

  • An alternative way to request read receipts

    - by lavanyadeepak
    Sometime or other we use messaging namespaces like System.Net.Mail or System.Web.Mail to send emails from our applications. When we need to include headers requesting delivery or read receipts (often called Message Disposition Notifications), we are locked into the limitation that not all email servers and email clients can satisfy the request. We can push this border a little now, thanks to an innovation I discovered from Gawab. It embeds a small invisible image of 1x1 dimension, whose image source reads recieptimg.php?id=2323425324. When this image is requested by the web browser or email client, a server-side handler does a smart mapping based on the ID to indicate that the message was read. We call these 'web bugs'. But wait, it is not a foolproof solution, since spammers misuse this technique to confirm that an email address is active, and most email clients suppress inline images for security reasons. I just thought I'd share this observation for the benefit of others.

    Read the article

  • Avast Antivirus Crashes

    - by user67966
    Well, I installed Avast antivirus on Ubuntu 12.04, but after updating, it crashes! So I made some tweaks, like below:
      1. I pressed Ctrl+Alt+T to open a terminal and ran the command: sudo gedit /etc/init.d/rcS
      2. I typed my password and hit Enter.
      3. When the text file opened, I added the line: sysctl -w kernel.shmmax=128000000
      4. I made sure the line I added is before: exec /etc/init.d/rc S
      5. This is how it should look:
      Code:
      #! /bin/sh
      # rcS
      #
      # Call all S??* scripts in /etc/rcS.d/ in numerical/alphabetical order
      #
      sysctl -w kernel.shmmax=128000000
      exec /etc/init.d/rc S
      6. I saved it.
      7. I rebooted.
    My question is: did I do anything wrong? I mean, as I have made some tweaks, will it lower Avast's security, the way viruses do? If you are a programmer, please check whether this contains bugs or harmful intentions... Thanks.
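
    One note on the tweak itself: a setting placed in /etc/init.d/rcS can be lost when that script is updated. The conventional, persistent home for a kernel parameter is /etc/sysctl.conf, for example:

        echo 'kernel.shmmax = 128000000' | sudo tee -a /etc/sysctl.conf
        sudo sysctl -p    # apply immediately, without rebooting

    As for the security worry: raising kernel.shmmax only increases the maximum size of a shared memory segment (which Avast needs to load its virus database); it does not weaken any security mechanism.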

    Read the article

  • Mismatch between versions of libkdcraw20 and libkdcraw-data

    - by naveen jankar
    I'm using Ubuntu 12.04 LTS. When installing digiKam, I get a mismatch between the versions of libkdcraw20 and libkdcraw-data it wants and those available in the repositories. It wants version 4.8.5-0ubuntu0.2 (in other words, the latest version according to Synaptic), but the available one is 4.8.5-0ubuntu0.3, in both cases. Is there a workaround? Or how do I ask the Ubuntu maintainers to rectify this? Addendum: in Synaptic I selected digikam to be installed. It downloaded all the dependencies, but the two in question were not found; the message it gave was "W: Failed to fetch security.ubuntu.com/ubuntu/pool/main/libk/libkdcraw/… 404 Not Found [IP: 91.189.91.13 80]". I Google-searched the two files in the repositories and found that the version available there is ubuntu0.3 instead of ubuntu0.2.
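
    A 404 like that usually means the local package index is stale: the mirror has moved on to ubuntu0.3 while Synaptic is still offering ubuntu0.2. Refreshing the index should realign the versions:

        sudo apt-get update                             # refresh the package lists
        apt-cache policy libkdcraw20 libkdcraw-data     # the candidate versions should now agree
        sudo apt-get install digikam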

    Read the article
