Search Results

Search found 20281 results on 812 pages for 'software engineer'.

  • User Permissions: Daemon and User

    - by Eddie Parker
    Hello: I often run into this issue on Linux, and I'd love to know the proper way of solving it. Say I have a daemon running. In my example, I'll use lighttpd, a web server. Some software, like WordPress, enjoys having read/write access to files so it can update applications via a web interface, which I think is quite handy. At the same time, I enjoy being able to hack on my files using vim, under my local user account, 'eddie'. Herein lies the rub. Either I chown everything to lighttpd or eddie plus a shared group between them both and chmod it 660, or I perpetually sudo to edit the damned things. The former isn't a bad solution, until I create a new file, in which case I have to remember to chmod it appropriately, or create some hack like a cron job that chmods for me. Is there an easier way of doing this? Have I overlooked something? Cheers, -e-
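    A minimal sketch of the shared-group approach described above, with the setgid bit added so newly created files inherit the group automatically. It assumes the web root is /var/www, lighttpd runs under the www-data group, and the local account is eddie (adjust the names to your setup):

        sudo adduser eddie www-data                      # put the local user in the daemon's group
        sudo chgrp -R www-data /var/www                  # hand the whole tree to the shared group
        sudo chmod -R ug+rwX /var/www                    # owner and group get read/write (and traverse on directories)
        sudo find /var/www -type d -exec chmod g+s {} +  # setgid directories: new files inherit the group

    New files still take their mode from the creating process's umask, so the daemon (or your shell) may also need a group-writable umask such as 002 before this becomes fully hands-off.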

  • Cannot start tor with vidalia, failed to bind listening port because of tor-socks running

    - by ganjan
    I get these errors trying to run Tor with Vidalia:

        Apr 19 21:55:15.371 [Notice] Tor v0.2.1.30. This is experimental software. Do not rely on it for strong anonymity. (Running on Linux i686)
        Apr 19 21:55:15.372 [Notice] Initialized libevent version 1.4.13-stable using method epoll. Good.
        Apr 19 21:55:15.373 [Notice] Opening Socks listener on 127.0.0.1:9050
        Apr 19 21:55:15.373 [Warning] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running?
        Apr 19 21:55:15.373 [Warning] Failed to parse/validate config: Failed to bind one of the listener ports.
        Apr 19 21:55:15.373 [Error] Reading config failed--see warnings above.

    I don't think Tor is running. Here is an nmap scan of my localhost:

        Starting Nmap 5.21 ( http://nmap.org ) at 2011-04-19 21:59 CEST
        Nmap scan report for localhost (127.0.0.1)
        Host is up (0.0000050s latency).
        Hostname localhost resolves to 2 IPs. Only scanned 127.0.0.1
        rDNS record for 127.0.0.1: localhost.localdomain
        Not shown: 989 closed ports
        PORT      STATE SERVICE
        22/tcp    open  ssh
        53/tcp    open  domain
        80/tcp    open  http
        139/tcp   open  netbios-ssn
        445/tcp   open  microsoft-ds
        631/tcp   open  ipp
        3128/tcp  open  squid-http
        3306/tcp  open  mysql
        9000/tcp  open  cslistener
        9050/tcp  open  tor-socks
        10000/tcp open  snet-sensor-mgmt

    I see tor-socks is running here, which is probably the cause of the problem. How do I stop this from starting up? I want to use Vidalia so I can monitor what's going on.
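    The listener on port 9050 suggests that a system-wide Tor service (installed from the distribution's tor package) is already running, which stops the Tor instance that Vidalia launches from binding the same port. A hedged sketch of stopping and disabling that service on a Debian/Ubuntu-style system follows; the service name is an assumption, so adjust for your distribution:

        sudo service tor stop          # stop the currently running system Tor (or: sudo /etc/init.d/tor stop)
        sudo update-rc.d tor disable   # keep the system service from grabbing port 9050 again at boot

    After that, starting Vidalia should let its own Tor process bind 127.0.0.1:9050 normally.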

  • Workaround to extend limited screen real-estate on Windows?

    - by Brian
    I need a means to use a software tool that requires at least 900 pixels of vertical resolution (as in, the "OK" button to save settings won't be reachable on smaller displays) on a laptop/projector with only 768 pixels of vertical resolution for a training session. So far the only workaround that's been suggested is to memorize the number of tab stops to reach the "OK" and "Cancel" buttons. Any suggestions on a better workaround? What I'd like to see is a utility that would let me treat the physical display as a 1024x768 view port into a larger, virtual display area. Does anything like that exist? Anything else that might help?

  • Transfer used ram to swap and free more physical ram

    - by AbuOmar
    I have a VPS with 256 MB of physical RAM and 512 MB of swap space, and I am trying to install a piece of software that needs more than 256 MB but less than 512 MB of RAM, so it uses swap. The problem is that at some point the installation process checks the available physical RAM, finds it full (with some swap already in use), and pauses, telling me that more RAM is needed to continue. It looks like I need to move some of the used physical RAM to swap while the process is running. Is there any way I can do that, or any other solution? Unfortunately I am using an OpenVZ VPS, so the vm.swappiness option cannot be modified as a workaround.

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

  • Windows Server Configuration with Exchange, SQL Express and IIS

    - by Reafidy
    In our small office we are currently running a standalone tower server with WS 2008 R2, SQL Express and IIS. This server is going to be decommissioned and scrapped as it's old and very noisy. We are going to purchase a new server with WS 2012 Standard and a heap of RAM. It will still be a standalone server, so it will be a domain controller and have SQL Express and IIS installed. We intend to install the Hyper-V role and host a second virtual server to distribute the load. We are a small company with only 15 staff members, so it's not a huge load on the server. Can a single server handle this type of installation? We don't want to purchase two servers. If so, how should it be configured with regard to which software packages should be virtualized (if any)? Redundancy is not a huge issue for us.

    Read the article

  • Windows 8 wmvcore.dll

    - by Dominykas Mostauskis
    I am running Windows 8 Pro N x64 without Windows Media Player (which might be called Windows Media Center in the case of Windows 8) preinstalled. Apparently certain software packages require WMP to be installed, in my case Premiere Pro, (wmvcore.dll required error); however, Microsoft will only be releasing it at the end of October, which is a month from now, while no other WMP packages seem to install on Windows 8. Tried downloading a Windows 7 x64 SP1 wmvcore.dll to no avail. This is a problem for many people at the moment, and a solution would be appreciated.

    Read the article

  • How to fix AVI index error

    - by Tim
    I am trying to open an AVI file. The first piece of software I tried was VLC media player. It reported an error about the AVI index: "This AVI file is broken. Seeking will not work correctly. Do you want to try to fix it? This might take a long time." I chose yes, and it began fixing the AVI index but exited when the repair progress bar reached 20% or so. Then the video started playing, but it stopped much earlier than it was supposed to finish. Next I tried to open it in Totem Movie Player, which also stopped early, at the same place as in VLC. Then I tried to play it in GMPlayer. There the entire AVI file can be played from start to finish, but it is impossible to drag the playback progress bar, which was possible in VLC and Totem. I heard that Avidemux can fix AVI index errors, but it turned out that it failed to open the AVI file at all. So I was wondering how I can fix the AVI index error, or at least drag the playback progress bar in GMPlayer? Thanks and regards!
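    If the underlying streams are intact, a common repair is to remux the file so the container gets a fresh index. A minimal sketch using mencoder or ffmpeg (file names are examples; neither command re-encodes the audio or video):

        # MPlayer's companion tool can rebuild the index while stream-copying
        mencoder -idx broken.avi -ovc copy -oac copy -o fixed.avi

        # alternatively, remux with ffmpeg, which writes a new index into the copy
        ffmpeg -i broken.avi -c copy fixed.avi

    The resulting copy should then seek normally in VLC, Totem, and GMPlayer.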

    Read the article

  • Arbitrary Key Remapping on a Mac

    - by Mason
    I bought a cheap Chinese replacement keyboard for my late 2007 MBP. The close square/curly bracket key actually sends a left control signal to the Mac. So I'm trying to remap my backslash/pipe key to be close square/curly bracket but I can't find the key remapping software to do it. Double Command and KeyRemap4Macbook can't do arbitrary key remaps and uControl/fkeys don't work on Snow Leopard. Anyone have ideas? I have no problem editing text config files if necessary.

    Read the article

  • Desktop Fun: Music Icon Packs

    - by Asian Angel
    If you really love music and want to liven up your desktop then get ready to create a desktop concert with our Music Icon Packs collection. Note: To customize the icon setup on your Windows 7 & Vista systems see our article here. Using Windows XP? We have you covered here. Sneak Preview For our desktop example we decided to go with a touch of anime musical fun. The icons used are from the Guitar Icons set shown below. Note: Wallpaper can be found here. An up close look at the icons that we used… Notes icon set 1 *.png format only Download Notes icon set 2 *.png format only Download Notes icon set 3 *.png format only Download Notes icon set 4 *.png format only Download Big Band Set 1 *.ico format only Download Big Band Set 2 *.ico format only Download Acoustic Guitars *.ico and .png format Download Acoustic Guitar *.ico format only Download Guitar Icons *.ico format only Download Guiter Skulll *.ico format only Download Dented Music *.ico format only Download Music Icons *.ico format only Download Ipod Mini *.ico format only Download MP3 Players Icons *.ico format only Download MusicPhones icon *.ico and .png format Download Wanting more great icon sets to look through? Be certain to visit our Desktop Fun section for more icon goodness!

    Read the article

  • Wi-Fi performance in Windows 8 RP on a MacBook Air (mid 2011)

    - by Steven Lu
    I was able to install the Boot Camp Windows software using the executable that it provided, and there are no unrecognized or unknown devices in Device Manager. Wi-Fi works but it seems to be limited to an extremely slow 1.5Mbits. Network Center reports an 802.11n connection (at 65Mbps usually) but transfers never reach above about 200kB/s. Being limited to 1/20th of the connection speed of my internet service is quite frustrating. Does anybody experience the same issue? I have been trying to identify the Broadcom Wi-Fi chipset and a driver that I could try to upgrade to but I have made very little progress on Google on this front.

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspxI have had an idea percolating in the back of my mind for over a year now that I’ve just recently started to implement. This idea relates to building out “internal tools” to ease the maintenance and on-going support of a software system. The system that I currently work on is (mostly) web-based, so we traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don’t want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc). When we try to build new tools like this we often struggle with the level of effort required to build them. Effort Required Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don’t need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don’t even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we’ve written to accomplish a given task. All we really need is a simple console application! Plumbing Problems A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John’s idea has a lot of merit, and we tried building out some internal tools as simple Console applications. Unfortunately, this was often easier said that done. Doing a “File –> New Project” to build out a tool for a mature system can be pretty daunting because that new project is totally empty.  In our case, the web application code had a lot of of “plumbing” built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, it managed all of the context that needs to follow a user around the application such as their timezone and regional/language settings. In addition, the configuration file for the web application  (a web.config in our case because this is an ASP .NET application) is large and would need to be reproduced into a similar configuration file for a Console application. While most of these problems are could be solved pretty easily with some refactoring of the codebase, building Console applications for internal tools still potentially suffers from one pretty big drawback: you’d have to execute them on a machine with network access to all of the needed resources. 
Obviously, our web servers can easily communicate the the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that’s a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible. Mix and Match So we need a way to build tools that are easily accessible via the web application but also don’t require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it’s part of the web application we get all of the plumbing that comes along with that code, and we’re executing everything on the web servers which means we’ll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I’m not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I’m only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the “back end” to a highly polished and eye-pleasing public face. Implementation As I mentioned at the beginning of this post, this is an idea that I’ve had for awhile but have only recently started building out. I’ve outlined some general guidelines and design goals for this effort as follows: Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be “wrapped” within the context of HTTP requests and responses, but I don’t want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user. Task-oriented code only: After building the initial “harness” for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible. Built-in documentation: One of the great things about most command line utilities is the ‘help’ switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves. I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line. 
Here’s a quick example of the code that would be needed to create a new tool to do something within your system: 1: public class CustomerTools 2: { 3: [Verb] 4: public void UpdateName(int customerId, string firstName, string lastName) 5: { 6: //invoke internal services/domain objects/hwatever to perform update 7: } 8: } This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the ‘Verb’ attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code: Parser.Run(args, new CustomerTools()); Note that ‘args’ is just a string[] that would normally be passed passed in from the static Main method of a Console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this: SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber ‘SomeExe’ in this example just represents the name of .exe that is would be created from our Console application. CLAP then interprets the arguments passed in order to find the method that should be invoked and automatically parses out the parameters that need to be passed in. After a quick spike, I’ve found that invoking the ‘Parser’ class can be done from within the context of a web application just as easily as it can from within the ‘Main’ method entry point of a Console application. There are, however, a few sticking points that I’m working around: Splitting arguments into the ‘args’ array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the ‘args’ parameter referenced above). Generally speaking they get split by whitespace, but it’s also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We’ll need to re-create this logic within our web application so that we can give the ‘args’ value to CLAP just like a console application would. Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can’t use Console.WriteLine within a web application, so I’ll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a Console application and a web application, so some kind of strategy pattern will likely emerge from this effort. Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort. Mimicking the console experience: This isn’t really a requirement so much as a “nice to have”. To start out, the command-line interface in the web application will probably be a single ‘textarea’ control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some javascript and CSS trickery to change that page into something with more of a “shell” interface look and feel. 
I’ll be blogging more about this effort in the future and will include some code snippets (or maybe even a full blown example app) as I progress. I also think that I’ll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.

  • New WebLogic Server 12.1.2 Installation and Patching Technology By Monica Riccelli

    - by JuergenKress
    WebLogic Server 12.1.2 has many new features, but the first new feature you are likely to notice is the change in installer technology. WebLogic Server and Coherence 12.1.2 are installed using Oracle Universal Installer (OUI) installer technology. We have also changed WebLogic Server patching technology from SmartUpdate to OPatch, the patching tool used to patch OUI installations. Note that installation and patching technology used for prior versions of WebLogic Server has not changed. The primary motivation for this change is to provide consistency across the Oracle stack. Prior to WebLogic Server 12.1.2, Fusion Middleware customers were required to use different technologies to install and patch, for example, Oracle Application Development Framework (ADF) with WebLogic Server. Now users can perform installation and patching across products more efficiently by using the same technologies, and by using new installation packages that simplify installation of Fusion Middleware products with WebLogic Server. Check the YouTube video that describes how to install WebLogic Server 12.1.2 using the OUI installer. The following WebLogic Server distributions are now available on the Oracle Technology Network (OTN) under OTN license, and from Oracle Software Delivery Cloud (OSDC) for licensed customers: wls_121200.jar - This OUI installer package includes WebLogic Server and Coherence and is targeted at WebLogic Server users who do not require other Fusion Middleware components such as ADF. This generic installer can be used to install WebLogic Server and Coherence on any supported operating system, and is intended for development or production purposes. This is available on OTN and OSDC. Read the complete article here.
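    For reference, the generic OUI package is launched with a plain Java command rather than a platform-specific binary, and post-install patches go through OPatch. A brief sketch (paths are examples; a certified JDK is assumed to be on the PATH):

        # start the Oracle Universal Installer for WebLogic Server and Coherence 12.1.2
        java -jar wls_121200.jar

        # patching now uses OPatch from the Oracle home instead of SmartUpdate
        $ORACLE_HOME/OPatch/opatch apply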

  • Moving from Test Automation to Development

    - by avgvstvs
    I'm in an interesting quandary. I've been doing test automation using QTP for about 1.5 years, and am in the slow process of switching to a developer role at my same company. I also begin my Master's in CS this fall. An old friend is trying to recruit me for a Sr. Test Automation position that could potentially pay me $23k more for exactly what I do now, but obviously I would be deferring the move to development. The new company is much more technical overall (I would be moving from financial services to industrial automation), and they have MANY more software dev roles available. I know traditionally QA-type jobs carry an odd "danger" tag, but test automation is really a different beast. Does anyone have any experience moving from test automation to development? Does the QA stigma exist? The extra $$ would be nice, but not at the expense of my career. I should note that my Master's will be in systems/parallel programming, so one thought is that I'll get automatic consideration for development upon completing my Master's. I also work 6 hrs/wk doing game development with a friend.

  • Home ZFS based NAS...What processor/chipset to use?

    - by MrBlargityBlarg
    So, I'm building a home/personal NAS. My plan is to expose both SMB fileshares for sharing files/media between hosts, but also to carve an iSCSI target LUN out of it for use by VMWare as a datastore. I want to use ZFS (software RAID) so that means I'll either be using FreeNAS, Solaris Express, or OpenIndiana. My question is basically: How much horsepower do I need? Obviously I/O is going to be my bottleneck but I want to be sure that I am not limiting my I/O because of a slow processor or chipset. So far the hardware plan is to use an Intel i3 and motherboard with one of the H87, Q87, or Z87 chipsets, a SAS controller (JBOD, no RAID) and if budget allows, I'm also hoping to get an SSD for the ZFS L2ARC and ZIL. Does anyone think I could get away with an Intel Atom or cheaper/less-capable processor/chipset than the i3 and [HQZ]87 listed above?

  • Create a Shortcut To Group Policy Editor in Windows 7

    - by Mysticgeek
    If you’re a system administrator and find yourself making changes in Group Policy Editor, you might want to make a shortcut to it. Here we look at creating a shortcut, pinning it to the Taskbar, and adding it to Control Panel. Note: Local Group Policy Editor is not available in Home versions of Windows 7. Typing gpedit.msc into the search box in the Start menu to access Group Policy Editor can get old fast. To create a shortcut, right-click on the desktop and select New \ Shortcut. Next type or copy the following path into the location field and click Next. c:\windows\system32\gpedit.msc Then give your shortcut a name…something like Group Policy, or whatever you want it to be and click Finish. Now you have your Group Policy shortcut… If you want it on the Taskbar just drag it there to pin it. And that’s all there is to it! If you want to change the icon, you can use one of the following guides… Customize Icons in Windows 7 Change a File Type Icon in Windows 7 Add Group Policy to Control Panel If you’re using non Home versions of XP, Vista, or Windows 7, check out The Geek’s article on how to Add Group Policy Editor to Control Panel.

  • How can you invert the colors of a PDF?

    - by legr3c
    I need to invert all the colors of a PDF document (background, text, graphics, and images). I want it persistent in the file so the inverted viewing options, that some viewers offer, won't help. Rasterizing the document and using image manipulation software is also not an option. I read somewhere that this can be done with the Enfocus PitStop plugin for Acrobat. However I didn't see a corresponding command anywhere. Am I missing something? Then I read that the ARTS PDF Crackerjack plugin for Acrobat offers negative printing so I tried that, too. The option is there but it simply doesn't work. I have been searching for very long for a way to do this. It seems like a common enough task but I just can't find out how to do it. Are there maybe any virtual printer drivers or something of the sort that support negative printing? Can anyone help?
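    Outside of Acrobat plugins, one frequently suggested approach is to run the file through Ghostscript with inverted transfer functions. This is a sketch rather than a guaranteed fix: it relies on transfer functions, which some renderers ignore, and the file names are examples.

        # write an inverted copy of the PDF by negating each colour channel
        gs -o inverted.pdf -sDEVICE=pdfwrite \
           -c "{1 exch sub}{1 exch sub}{1 exch sub}{1 exch sub} settransfer" \
           -f input.pdf

    The inversion is stored in the output file rather than applied as a viewing option, though how faithfully it appears still depends on the renderer honouring transfer functions.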

  • Mirror virtualized development environment

    - by David Casillas
    I work alone on some iOS projects in a local environment. I have been thinking of a way to share my development environment between my Mac Mini and my MacBook. I mostly work at home on the Mini, but sometimes I need to give a demo or work outside, and I would like to have the development environment mirrored on both. I have thought of using a virtual machine (via VirtualBox) with just my development tools installed. Then I could synchronize that VM between both computers with some software, so I would always have the exact same environment no matter which computer I use. Is there any good reason not to do it this way? I have not used virtualization much, so I have no background on the subject. My basic setup would be: Mac Mini: i7 dual core, 8 GB, OS X Mountain Lion host OS. MacBook: 2.4 GHz Core 2 Duo, 4 GB, OS X Lion host OS. VirtualBox with a Mountain Lion guest OS on both machines, running Xcode 5 and the simulator.

  • Can I provision half a core as a virtual CPU?

    - by ramdaz
    I am a virtualization newbie, so please advise on these questions. Note that using commercial VM software like Citrix or VMware is not an option for me. I have at my disposal a couple of 2x quad-core servers with 32 GB RAM. I need to create 16 VMs on each server to test some web applications. Can I provision half a core as a virtual CPU for each VM? To the best of my knowledge I can't do so on Xen. Is it possible on KVM or some other free, open-source VM solution? If it's not possible to assign half a core, how do I ensure that uniform processing power is available to all VMs? Since the job is to create separate instances for hosting 16 web apps on one physical server, do you recommend setting up a private cloud using Ubuntu Enterprise Cloud as a better option? Is there an HA solution under KVM, like Remus for Xen?
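    On KVM there is no way to literally attach half a core, but a guest's vCPU can be capped at roughly 50% of one core using the CFS bandwidth controls that libvirt exposes. A hedged sketch, assuming a reasonably recent libvirt (the quota/period tunables appeared around libvirt 0.9.4) and an example guest name:

        # limit guest "webvm01" to 50 ms of CPU time per 100 ms period, i.e. about half a core
        virsh schedinfo webvm01 --set vcpu_period=100000
        virsh schedinfo webvm01 --set vcpu_quota=50000

        # the same thing expressed in the domain XML:
        # <cputune>
        #   <period>100000</period>
        #   <quota>50000</quota>
        # </cputune>

    For the goal of uniform processing power, leaving every guest uncapped but with equal cpu_shares also works: the scheduler then divides the cores evenly whenever the host is under contention.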

  • front end to linux std mailbox for development purposes

    - by Fabio
    I am actually a software developer, though I do have a fair amount of Linux experience as a user, going back to 1997. I am normally on stackoverflow.com, so please excuse me if this question isn't appropriate here. I am working on a web project from which we send out emails. I work locally on a Linux box, and when coding I use my local mailboxes to check what has been sent. Emails sent to valid external addresses are not arriving at my official mailbox; they might be stopped by the providers' mail servers (Gmail, Yahoo). We are now sending out HTML mails too, and I need to check how they look. Is there a GUI frontend to the standard Linux/BSD mailbox? Or should I install an IMAP/POP server for this? Will such a server receive the emails sent to username@localhost? Thanks for any suggestion.
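    One way to get a GUI view of locally delivered mail, sketched here on the assumption of a Debian/Ubuntu-style box where mail already lands in /var/mail/<user>, is to serve that mbox over IMAP and point any graphical client at it:

        # install a local IMAP server
        sudo apt-get install dovecot-imapd

        # in Dovecot's mail settings, point it at the standard spool mailbox, for example:
        #   mail_location = mbox:~/mail:INBOX=/var/mail/%u

        # then configure Thunderbird/Evolution against imap://localhost using the Unix account

    Because the IMAP server reads the same spool that local delivery writes to, mail addressed to username@localhost shows up in the client, and HTML messages can be previewed in a real mail client.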

  • links for 2010-05-04

    - by Bob Rhubart
    IdMapper: A Java Application for ID Mapping across Multiple Cross-Referencing Providers H/T to Geertjan for posting a link to this paper on a Netbeans-based project. (tags: java netbeans) Mastering Your Multicore System - Oracle Solaris Video How Sun Studio compilers and tools can simplify these challenges and enable you to fully unlock the potential in multicore architecture. Don Kretsch presents at Tech Days, Brazil, 2009. (tags: oracle sun sunstudio multicore video) Allison Dixon: COLLABORATE: OAUG Staff #c10 ORACLENERD guest blogger Allison Dixon offers a peek behind the curtain and a tip of the hat to the people behind Collaborate 10. (tags: oracle oaug ioug collaborate2010) @myfear: Java EE 5 or 6 - which to choose today Author, software architect, and Oracle ACE Director Markus Eisele shares his insight into the choice between Java EE versions. (tags: oracle otn java oracleace glassfish) @blueadept61: Architecture and Agility #entarch In yet another great, succinct post, Oracle ACE Director Mike Van Alst offers more quotable wisdom than I can share here. Read the whole thing. (tags: oracle otn entarch enterprisearchitecture agile) @blueadept61: Governance Causes SOA Projects to Fail? Oracle ACE Director Mike Van Alst's short but thought-provoking post raises issues of language and perception in dealing with the cultural hurdles to SOA Governance. (tags: oracle otn soa soagovernance communication) Anthony Shorten: List of available whitepapers as of 04 May 2010 Anthony Shorten shares a list of whitepapers available from My Oracle Support covering Oracle Utilities Application Framework based products. (tags: oracle otn whitepapers frameworks documentation) @processautomate: SOA Governance is Not a Documentation Exercise Leonardo Consulting SOA specialist Mervin Chiang proposes that simply considering and applying basic SOA governance -- service management -- can go a long way. (tags: otn oracle soa soagovernance) Article: Cloud Computing Capability Reference Model This Cloud Computing Capability Reference Model provides a functional view of the layers in a typical cloud stack to help Enterprise Architects identify the components necessary to implement Cloud solutions. (tags: oracle otn cloud entarch soa virtualization)

  • Connecting to Aerohive APs from Laptops running Win. 7 using authentication from a Windows 2008 domain server

    - by user264116
    I have deployed a wireless network using Aerohive access points. Two of them are set up as RADIUS servers. I want my users to be able to use the same user name and password they use when they log onto our domain. They are able to do this from Android devices and from computers running Windows 8, but it will not work on Windows 7 machines. How do I remedy this situation, keeping in mind that the machines are personal machines, not company owned, and I will have no way to change their hardware or software?

  • WebCenter .NET Accelerator - Microsoft SharePoint Data via WSRP

    - by john.brunswick
    Platforms in the enterprise will never be homogeneous. As much as any vendor would enjoy having their single development or application technology be exclusively adopted by customers, too much legacy, time, education, innovation and vertical business needs exist to make using a single platform practical. Java and .NET are the two industry application platform heavyweights, and more often than not business users are leveraging various systems in their day-to-day activities that incorporate applications developed on top of both platforms. BEA Systems acquired Plumtree Software to complete their "liquid" view of data, stressing that, regardless of the particular source system, heterogeneous data could interoperate not only through layers that allowed for data aggregation, but also at the "glass", or UI, layer. The technical components that allowed the integration at the glass thrive today at Oracle, helping WebCenter to provide a rich composite application framework. Oracle Ensemble and the Oracle .NET Application Accelerator allow WebCenter to consume and interact with the UI layers provided by .NET applications and a series of other technologies. The beauty of the .NET accelerator is that it can consume any .NET application and act as a Web Services for Remote Portlets (WSRP) producer. I recently had a chance to leverage the .NET accelerator to expose an ASP.NET 2.0 (C#) application in the WebCenter UI (pictured above) and wanted to share a few tips to help others get started with similar integrations. I was using two virtual machines for the exercise: one with Windows Server 2003 running SharePoint, and the other running WebCenter Spaces 11g. For my sample application data I ended up using SharePoint 2007 lists and calendars (MOSS 2007) to supply results using a .NET API for SharePoint.

  • 12.04: Persistent Gimp 2.2 from Gimphoto or Gimpshop. Cannot install 2.8

    - by Jorge M. Treviño
    I have a very messed-up installation of GIMP. Some time ago I installed Gimpshop or Gimphoto (I can't really remember which) and it installed GIMP 2.2. It didn't work for me, so I tried to remove it. I've followed all the GIMP remove, autoremove, clean, update and upgrade instructions I've found here, to no avail. Now Software Center doesn't even show 2.8; instead it shows 2.6. I've installed and removed GIMP from the terminal countless times. Running GIMP from the Dash doesn't do anything, but entering "gimp" at the terminal prompt gives me a 2.2 installation screen. I cannot for the life of me find and remove the darn leftover garbage. How can I completely clean my system (12.04, fully updated today) of everything GIMP-related so I can give 2.8 a try? Apologies in advance if this is a dupe, but I've run through all the questions with a Gimp tag and none has helped me. sudo apt-get install -f gives me 0,0,0,0. Thanks in advance.
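    A sketch of one way to hunt down and purge the leftovers, assuming an apt-based 12.04 install. The package names below are examples (Gimphoto/Gimpshop builds use their own), so go by what the listings report:

        dpkg -l | grep -i gimp                  # list everything GIMP-related the package system knows about
        which -a gimp                           # check for copies outside the package system (e.g. /usr/local/bin)
        sudo apt-get purge gimp gimp-data       # purge whatever the listing shows (add the gimphoto/gimpshop packages reported)
        sudo apt-get autoremove
        rm -rf ~/.gimp-2.2 ~/.gimp-2.6          # per-user settings left behind by the old versions

        # stock 12.04 shipped GIMP 2.6; 2.8 was commonly installed from a PPA at the time, e.g.:
        sudo add-apt-repository ppa:otto-kesselgulasch/gimp
        sudo apt-get update && sudo apt-get install gimp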

  • Need suggestion on installing Windows Server on a Windows 7 laptop

    - by Kumar
    I recently bought a laptop with Windows 7 for software development. Unfortunately, Windows 7 Home Basic comes with a limited version of IIS, which is not sufficient for development, so I would like to have Windows Server 2008 R2 for server development. I don't want to format Windows 7 and install Windows Server, as Windows 7 came with the laptop. Could anyone please suggest the best possible option for having Windows Server 2008 on my laptop without formatting Windows 7? Any solution should not void the laptop's warranty. Regards, Kumar.
