Search Results


  • general questions about link spam

    - by hen3ry
    Hello, A CMS-based site I manage is suffering from a small but ominously growing number of almost certainly bot-emplaced, invisible spam links placed in registered-user-only shoutboxes and user forums. "Link spam", yes? Until recently, I've kept my eyes on narrow tech issues, and I'm having trouble understanding what's going on. I understand that we need to tighten up our registration procedures, but more generally:
    - Do I understand correctly that our primary interest in combating link spam on our site is that major search engines reduce or zero the search visibility of sites that contain link spam? Although we're non-commercial, we don't want to be at the bottom of the rankings, or eliminated altogether.
    - Are the linked-to sites the direct beneficiaries of the spam links, or is there some kind of indirection?
    - What is the likely relationship between the link-spammers and the owners of the (directly or indirectly) linked-to sites? Are the owners of the linked-to sites paying the link-spammers for higher visibility? Are the owners aware that this method is being used?
    - It is my impression that major search engines are capable these days of detecting that given sites are being promoted by link spam, and that these sites may consequently be reduced in search rank or dropped altogether. Do these sanctions occur frequently?
    - Is there any potential value in sending notifications to the owners of the linked-to sites that their visibility is at risk?
    TIA, hen3ry

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto is more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways. If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:
    - Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
    - These interdependencies are not obvious to the make utility, and can lead to races.
    - They are not obvious to the human reader, who may therefore not realize that they exist, and break them.
    Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:
    - It requires a long list of exceptions to silence our normal unused-proto-item error checking.
    - In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
    - Mixing cruft with valuable items makes it hard to discern which is which.
    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special-case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight; it will be a long-term, case-by-case project, but the overall trend is clear.
    The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking
    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.
    The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures:
    - Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server.
    - Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server.
    These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so the second form of error was still possible.
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors:
    - Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
    - Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
    - Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.
    - Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately.
    All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated
        7021198 ld option to warn when link accesses a library via default path
        PSARC/2011/068 ld -z assert-deflib option
    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:
        -z assert-deflib=[libname]
    Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library:
        ld ... -z assert-deflib=libc.so ...
    -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exception to this is dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with
        7021896 Prevent OSnet from accidentally linking to build system
    which integrated into snv_162 (March 2011).
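    The check that -z assert-deflib automates can also be approximated after the fact. Purely as an illustration (this is not the OSnet tooling), here is a rough Python sketch that flags dependencies a built binary resolves from the system default paths, assuming the usual "lib => path" output format of ldd:

        import subprocess
        import sys

        SYSTEM_DIRS = ("/lib", "/usr/lib")   # link-editor default search paths

        def check_deflib(binary, allowed=("libc.so.1",)):
            """Warn when a dependency resolves from a default system directory."""
            out = subprocess.run(["ldd", binary], capture_output=True, text=True).stdout
            for line in out.splitlines():
                if "=>" not in line:
                    continue
                name, _, rest = line.strip().partition(" => ")
                path = rest.split()[0] if rest.split() else ""
                if name in allowed:          # explicit exception, like -z assert-deflib=libc.so
                    continue
                if path.startswith(SYSTEM_DIRS):
                    print(f"warning: {name} resolved from system path {path}")

        if __name__ == "__main__":
            check_deflib(sys.argv[1])

    The real option is better because it fails the link itself rather than auditing afterwards, but the sketch shows how simple the underlying rule is: nothing should resolve from the defaults.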
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult such errors are to prevent without an automatic system in place to catch them.
    Conclusions
    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

  • unable to upgrade to 12.10 beta 2 from 12.04 [closed]

    - by user85959
    Possible Duplicate: There's an issue with an Alpha/Beta Release of Ubuntu, what should I do?

    authenticate 'quantal.tar.gz' against 'quantal.tar.gz.gpg'
    exception from gpg: GnuPG exited non-zero, with code 2
    Debug information:
    gpg: Signature made Fri 28 Sep 2012 03:55:55 AM IST using DSA key ID 437D05B5
    gpg: /tmp/update-manager-bpIptI/trustdb.gpg: trustdb created
    gpg: Good signature from "Ubuntu Archive Automatic Signing Key "
    gpg: WARNING: This key is not certified with a trusted signature!
    gpg: There is no indication that the signature belongs to the owner.
    Primary key fingerprint: 6302 39CC 130E 1A7F D81A 27B1 4097 6EAF 437D 05B5
    gpg: Signature made Fri 28 Sep 2012 03:55:55 AM IST using RSA key ID C0B21F32
    gpg: Can't check signature: public key not found
    Traceback (most recent call last):
      File "/usr/lib/python2.7/dist-packages/UpdateManager/UpdateManager.py", line 1110, in on_button_dist_upgrade_clicked
        fetcher.run()
      File "/usr/lib/python2.7/dist-packages/UpdateManager/Core/DistUpgradeFetcherCore.py", line 253, in run
        _("Authenticating the upgrade failed. There may be a problem "
      File "/usr/lib/python2.7/dist-packages/UpdateManager/DistUpgradeFetcher.py", line 41, in error
        return error(self.window_main, summary, message)
      File "/usr/lib/python2.7/dist-packages/UpdateManager/Core/utils.py", line 384, in error
        d.window.set_functions(Gdk.FUNC_MOVE)
    RuntimeError: unable to get the value
    gpg: /tmp/tmplqoLDu/trustdb.gpg: trustdb created

  • Adobe Plugin in Firefox showing grey screen; something to do with range requests on Apache?

    - by Sam Minnée
    I have a web page with a link to a PDF file (target="_blank"). If I click the link, the PDF reader just shows a grey screen within the Firefox browser. If I copy that link and manually open it in a new tab, the PDF will display correctly, and subsequent requests made by clicking the original link now work, suggesting that the problem occurs when loading the file into the cache. It appears as though the Adobe PDF reader plugin is making byte-range requests (I see lots of 206 responses) and I suspect that this may be the cause of the issue. I am running an Apache webserver. Has anyone had problems with Apache and Adobe's byte-range requests? Are there any workarounds?
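    One way to confirm the byte-range theory before changing the server: request the PDF once normally and once with a Range header, and compare status codes. A quick probe using Python's requests library (the URL is a placeholder):

        import requests

        url = "http://example.com/path/to/file.pdf"   # placeholder URL

        whole = requests.get(url)
        partial = requests.get(url, headers={"Range": "bytes=0-1023"})

        print("plain GET:", whole.status_code)                      # expect 200
        print("range GET:", partial.status_code)                    # 206 means ranges are honored
        print("Accept-Ranges:", whole.headers.get("Accept-Ranges"))

    If ranges are implicated, a commonly suggested workaround is to stop Apache advertising range support for PDFs (e.g. unsetting the Accept-Ranges response header via mod_headers for .pdf files), which forces the Reader plugin to fetch the whole file in one request.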

  • Inheriting projects - General Rules? [closed]

    - by pspahn
    Possible Duplicate:
    - When is a BIG Rewrite the answer?
    - Software rewriting alternatives
    - Are there any actual case studies on rewrites of software success/failure rates?
    - When should you rewrite?
    - We're not a software company. Is a complete re-write still a bad idea?
    - Have you ever been involved in a BIG Rewrite?
    This is an area of discussion I have long been curious about, but overall, I generally lack the experience to give myself an answer that I would fully trust. We've all been there: a new client shows up with a half-complete project they are looking to finish and launch. For whatever reason, they fired their previous developer, and it's now up to you to save the day. I am just finishing up a code review for a new client, and in my estimation it would be better to scrap what the previous developers built and start from scratch. There's a ton of reasons why I am leaning this way, but it still makes me nervous, since the client isn't going to want to hear "those last guys built you a big turd, and I can either polish it, or throw it in the trash". What are your general rules for accepting these projects? How do you determine whether it will be better to start from scratch or continue with the existing code base? What other extra steps might you take to help control client expectations, since the previous developer may have inflated those expectations beyond a reasonable level? Any other general advice?

  • How to handle a player's level and its consequent privileges?

    - by Songo
    I'm building a game similar to Mafia Wars where a player can do tasks for his gang and gain experience, thus advancing his level. The game is built using PHP and a MySQL database. In the game I want to limit the resources allowed to a player based on his level. For example:

                | Max gold | Max army size | Max moves | ...
        Level 1 |     1000 |           100 |        10 | ...
        Level 2 |     1500 |           200 |        20 | ...
        Level 3 |     3000 |           300 |        25 | ...

    In addition, certain features of the game won't be allowed until a certain level is reached: players under Level 10 can't trade in the game market, players under Level 20 can't create alliances, etc. The way I have modeled it is by implementing a very long ACL (Access Control List) with about 100 entries (an entry for each level). However, I think there may be a simpler approach to this, seeing that this feature has been implemented in many games before.
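    Rather than one ACL entry per level, the caps can live in a single lookup table keyed by level, with feature unlocks expressed as minimum-level thresholds. A minimal sketch of the idea in Python (in MySQL this would be one level_limits table joined on the player's level); all names here are hypothetical:

        # Per-level resource caps: (max_gold, max_army_size, max_moves)
        LEVEL_CAPS = {
            1: (1000, 100, 10),
            2: (1500, 200, 20),
            3: (3000, 300, 25),
        }

        # Feature gates as minimum levels, instead of one ACL entry per level
        FEATURE_MIN_LEVEL = {
            "market_trading": 10,
            "create_alliance": 20,
        }

        def caps_for(level):
            # Fall back to the highest defined row for levels past the table
            return LEVEL_CAPS.get(level, LEVEL_CAPS[max(LEVEL_CAPS)])

        def can_use(feature, level):
            return level >= FEATURE_MIN_LEVEL.get(feature, 1)

        assert caps_for(2) == (1500, 200, 20)
        assert can_use("market_trading", 12)
        assert not can_use("create_alliance", 12)

    Two small structures and one threshold comparison replace the hundred-entry ACL, and adding a level becomes a data change rather than a code change.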

  • What is a good book for administration & configuration of Storage logical arrays?

    - by unknown (yahoo)
    I am looking for a book which can explain the pros and cons of different combinations of configurations/policies of storage arrays, and may also suggest some best practices for certain scenarios, e.g. when data availability & security is very important. There are a lot of "for dummies" books, but they don't go in depth. I am more of a developer, so I would like to understand how and why exactly it works beneath the policies & configuration settings. I am working with an EMC CLARiiON logical array, but I will have to work with EMC Symmetrix or NetApp or other types of disk arrays.

  • Run a script with Apache2 in a certain directory

    - by TheGatorade
    I am trying to run WebMCP on an Apache2 server. It's got 2 executable files, which I have in /opt/webmcp/cgi-bin/webmcp.lua and /opt/webmcp/cgi-bin/webmcp-wrapper.lua. If I run the wrapper from a location that is not /opt/webmcp/cgi-bin, it says it cannot find webmcp.lua and gives a 500 error. If I run it from the correct directory, it works. My server has webmcp.lua set as DirectoryIndex and it's giving a 500 error; could this be because of the same problem? /opt/webmcp/cgi-bin/ is already set as DocumentRoot, and is accessible by www-data.

  • Dell V105 and 10.6

    - by James
    Hello, I am trying to get a Dell V105 printer to work with my iMac. It is running 10.6 and I have installed all the drivers on the Snow Leopard disk. I believe there is no official Dell driver, but I have heard that Dell printers are made by third parties that may sell the same printer under different names. There are quite a few Lexmark models that seem to look very similar, and I have tried setting the driver to the 2600 series that looks very similar, but that has not enabled me to print, although it does report the supply levels correctly. Can anyone help? Thanks for your time

  • ArchBeat Link-o-Rama Top 20 for March 18-24, 2012

    - by Bob Rhubart
    The top-twenty most-clicked links as shared via my social networks for the week of March 18-24, 2012.
    - Oracle's ZFS Storage Appliance Simulator | Steen Schmidt
    - Oracle Linux Online Forum - 4 sessions, 9 speakers + live chat March 27
    - OWSM vs. OEG - When to use which component - 11g | Prakash Yamuna
    - Northeast Ohio Oracle Users Group 2 Day Seminar - May 14-15 - Cleveland, OH
    - SOA! SOA! SOA!; OSB 11g Recipes and Author Interviews
    - Webcast: Oracle Business Intelligence Mobile - March 27 - 10am PT / 1pm ET
    - Oracle Hardware Systems: The Extreme Performance Tour - Dates and Locations Worldwide
    - Oracle Cloud Conference: dates and locations worldwide
    - Mismatch: Developer skills and customer demands | Floyd Teter
    - OTN Virtual Developer Day - Java (APAC - in English) - March 27
    - Webcast Q&A: Demystifying External Authorization
    - 2 New Cloud Computing resources added to free IT Strategies from Oracle library
    - Encapsulating OIM API’s in a Web Service for OIM Custom SOA Composites | Alex Lopez
    - Webcast: Simplify Oracle RAC Deployment with Oracle VM
    - SOA gets mobilized; mobile gets SOA-ized: survey | Joe McKendrick
    - Integrating with Oracle Fusion Applications: Discovering Integration Artifacts | Rajesh Raheja
    - Oracle Access Manager 11g - useful links | Dmitry Nefedkin
    - Anil Gaur on Cloud Computing Support in Java EE 7
    - Enterprise app shops announcements are everywhere | Andy Mulholland
    - The extraordinary software development manager | Seth Godin
    Thought for the Day: "Every large system that works started as a small system that worked." (Anonymous)

  • Language parsing to find important words

    - by Matt Huggins
    I'm looking for some input and theory on how to approach a lexical topic. Let's say I have a collection of strings, which may just be one sentence or potentially multiple sentences. I'd like to parse these strings and rip out the most important words, perhaps with a score that denotes how likely each word is to be important. Let's look at a few examples of what I mean.
    Example #1: "I really want a Keurig, but I can't afford one!"
    This is a very basic example, just one sentence. As a human, I can easily see that "Keurig" is the most important word here. Also, "afford" is relatively important, though it's clearly not the primary point of the sentence. The word "I" appears twice, but it is not important at all since it doesn't really tell us any information. I might expect to see a hash of word/scores something like this:
        "Keurig" => 0.9
        "afford" => 0.4
        "want"   => 0.2
        "really" => 0.1
        ...
    Example #2: "Just had one of the best swimming practices of my life. Hopefully I can maintain my times come the competition. If only I had remembered to take off my non-waterproof watch."
    This example has multiple sentences, so there will be more important words throughout. Without repeating the scoring exercise from example #1, I would probably expect to see two or three really important words come out of this: "swimming" (or "swimming practice"), "competition", and "watch" (or "waterproof watch" or "non-waterproof watch", depending on how the hyphen is handled).
    Given a couple of examples like this, how would you go about doing something similar? Are there any existing (open source) libraries or algorithms in programming that already do this?
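    Established approaches include TF-IDF weighting against a background corpus and keyphrase extractors such as RAKE or TextRank (NLTK and gensim provide building blocks for these). Purely to make the scoring idea concrete, here is a toy Python sketch: relative word frequency with a stopword filter and a crude boost for capitalized, likely proper-noun tokens. The stopword list and weights are arbitrary placeholders:

        import re
        from collections import Counter

        STOPWORDS = {"i", "a", "an", "the", "to", "of", "my", "but", "one",
                     "can't", "had", "if", "only", "and", "really", "just"}

        def score_words(text):
            tokens = re.findall(r"[A-Za-z'-]+", text)
            freq = Counter(t.lower() for t in tokens if t.lower() not in STOPWORDS)
            if not freq:
                return {}
            top = max(freq.values())
            scores = {}
            for token in set(tokens):
                word = token.lower()
                if word in STOPWORDS:
                    continue
                score = freq[word] / top            # relative frequency
                if token[0].isupper():
                    score += 0.5                    # crude proper-noun boost
                scores[word] = round(score, 2)
            return scores

        print(score_words("I really want a Keurig, but I can't afford one!"))
        # e.g. {'want': 1.0, 'keurig': 1.5, 'afford': 1.0}

    A real solution would normalize scores, learn the stopword/IDF weights from a corpus, and handle multi-word phrases ("swimming practice"), which is exactly what the extractors named above do.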

  • Why are my 32bit OpenGL libraries pointing to mesa instead of nvidia, and how do I fix it?

    - by Codemonkey
    I have installed Nvidia's drivers on my Ubuntu 13 system, but according to this command (ldconfig -p | grep GL):

        $ ldconfig -p | grep GL
        libQtOpenGL.so.4 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libQtOpenGL.so.4
        libGLU.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGLU.so.1
        libGLEWmx.so.1.8 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGLEWmx.so.1.8
        libGLEW.so.1.8 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGLEW.so.1.8
        libGLESv2.so.2 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/mesa-egl/libGLESv2.so.2
        libGL.so.1 (libc6,x86-64) => /usr/lib/libGL.so.1
        libGL.so.1 (libc6) => /usr/lib/i386-linux-gnu/mesa/libGL.so.1
        libGL.so (libc6,x86-64) => /usr/lib/libGL.so
        libEGL.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/mesa-egl/libEGL.so.1

    the 32-bit version of OpenGL is pointing to mesa's libraries instead of nvidia's. This causes my Steam games to refuse to launch with the error:

        Could not find required OpenGL entry point 'glGetError'! Either your video card is unsupported, or your OpenGL driver needs to be updated.

    Why is this the case? When the nvidia installer asked me if I wanted to install "32bit compatibility libraries" (or something like that) I chose yes. How do I fix this?
    Edit: I just reinstalled the same Nvidia driver, and that apparently removed the 32-bit OpenGL driver completely:

        $ ldconfig -p | grep libGL.so
        libGL.so.1 (libc6,x86-64) => /usr/lib/libGL.so.1
        libGL.so (libc6,x86-64) => /usr/lib/libGL.so

    Now Steam won't start:

        You are missing the following 32-bit libraries, and Steam may not run: libGL.so.1

    Again, I chose YES when the installer asked me if I wanted to install 32bit libraries. Why are they not installed!?

  • Duplicate pseudo terminals in linux

    - by bobtheowl2
    On a Red Hat box [Red Hat Enterprise Linux AS release 4 (Nahant Update 3)] we frequently notice two people being assigned the same pseudo terminal. For example:

        $ who am i
        user1 pts/4 Dec 29 08:38 (localhost:13.0)
        user2 pts/4 Dec 29 09:43 (199.xxx.xxx.xxx)
        $ who -m
        user1 pts/4 Dec 29 08:38 (localhost:13.0)
        user2 pts/4 Dec 29 09:43 (199.xxx.xxx.xxx)
        $ whoami
        user2

    This causes problems in a script because "who am i" returns two rows. I know there are differences between the two commands, and obviously we can change the script to fix the problem. But it still bothers me that two users are being returned with the same terminal. We suspect it may be related to dead sessions. Can anyone explain why two (non-unique) pts numbers are being assigned and/or how that can be prevented in the future?

  • Networking not working in Windows 7 - "The account specified for this service is different from the account specified for other services"

    - by tog22
    I have a homebuilt computer with a GA-Z68MA-D2H-B3 motherboard with a Realtek RTL8111E LAN chip. Ethernet was working fine in Windows 7 until I reinstalled this OS, but now it's stopped, while still working in other OSes. I've tried reinstalling drivers from both the Gigabyte and Realtek sites to no avail; I've also plugged in and installed an Asus USB-N13 wifi dongle and, oddly, this doesn't work either. How can I diagnose and fix this issue? 'Network and Sharing Center' says "The account specified for this service is different from the account specified for other services running in the same process" under the heading 'Unknown'. Following the advice at http://www.sevenforums.com/network-sharing/130159-dependency-service-group-failed-start.html#post1122627 I've gone to 'Control Panel > Admin Tools > Services' and ensured the services listed at that URL are started (one - IIRC 'COM+ Event System' - refused to start as user 'Local Services', so I've had to set it to log on with the 'Local System account'... I suspect this may be part of the problem).

  • Looking for 2D Cross platform suggestions based on requirements specified

    - by MannyG
    I am an intermediate developer with minor experience on enterprise mobile applications for iPhone, Android and BlackBerry, looking to build my first ever mobile game. I did a Google search for some game dev forums and this popped up, so I thought I would try posting here as I lack luck elsewhere. If you have ever heard of the game for the iPhone and Android platform entitled Avatar Fight, then you will have an idea of the graphic capabilities I require: basically the battles, in which one sprite automatically attacks another doing cool animations, but all in 2D. My buddy and I have two motivations. One is to jump into mobile dev, as my experience is limited, as is his, so we would like some trending knowledge (HTML5 would be nice to learn). The other is to make some money on the side; we don't expect much, but polishing the game and putting our all into it will hopefully reward us a bit. We have looked into the Corona engine; however, a lot of people are saying it is limited in the graphics department. We are open to learning new languages like Lua, C++, Python, etc. Others we have looked at include PhoneGap, Rhomobile, Unity, and the list goes on. I really have no idea what the pros and cons of these are, but for a basic battle sequence and some mini games we want to choose the right one. Some more things that we will be doing include card games, side-scrolling flying-object-based games, maybe fishing stuff. We want to start small with these mini games and work our way up to the idea we would like to implement in the future. We only want to work in 2D. So with these requirements, please help me choose a platform to work on (cross-platform is what we are ideally leaning towards). Please feel free to throw in some pieces of advice you may have for newbie game developers like myself too. Thank you for reading!

  • Creating a newspaper that affects the game's economy?

    - by zardon
    I am writing a game in Objective-C/cocos2d where a newspaper is a central part of what controls, or rather affects, the game's world economy as well as what a city might do (such as increase X, reduce Y). The newspaper is a bit like a "Chance card" in Monopoly: it has an effect on something. My question is, what is the best way to write a newspaper that has both a random and a specific effect within the game? Would the best strategy be to write out all the things a newspaper can affect as a PLIST of headlines (with placeholders)? I think Tiny Tower uses a PLIST of events and randomly picks an event, but I'm not sure how it actually parses it, because certain events do different things. But then how do I parse all the scenarios that a newspaper can deliver? A big switch statement seems very long and complicated to do. I am wondering if there is a simpler way to handle this kind of thing. Related to this is that there might be no news on a given day, and I'm not sure what the newspaper should display then; should it just display the last headline? So, in summary:
    1) A newspaper generates a headline which affects different things, such as the world economy, prices, and how a city reacts.
    2) I need the newspaper to generate headlines (although there may be days when there are no headlines at all), but I am not sure how to parse it without using a big-ass switch statement.
    Thanks in advance.
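    One alternative to a big switch statement is a data-driven event table: each entry pairs a headline template with a generic effect record (target, attribute, delta), so applying an event is a lookup plus arithmetic rather than branching. The same shape ports directly to a PLIST of dictionaries in Objective-C. A rough Python sketch of the idea; every name in it is invented:

        import random

        # Each event: headline template plus a generic effect record.
        EVENTS = [
            {"headline": "Gold rush in {city}!",     "target": "economy", "attr": "gold_price", "delta": +0.10},
            {"headline": "Crime wave hits {city}",   "target": "city",    "attr": "happiness",  "delta": -0.05},
            {"headline": "Bumper harvest this year", "target": "economy", "attr": "food_price", "delta": -0.15},
        ]

        def todays_paper(world, city):
            if random.random() < 0.3:                # some days carry no news
                return world.get("last_headline", "No news today")
            event = random.choice(EVENTS)
            # Generic application: no switch statement, just attribute math.
            world[event["target"]][event["attr"]] *= 1 + event["delta"]
            world["last_headline"] = event["headline"].format(city=city)
            return world["last_headline"]

        world = {"economy": {"gold_price": 100.0, "food_price": 10.0},
                 "city": {"happiness": 0.8}}
        print(todays_paper(world, "Rivertown"))

    Events that need behavior beyond a numeric delta can carry a handler name looked up in a dictionary of functions, which keeps the "parser" a one-liner. Re-showing the previous headline on a no-news day (as above) is one reasonable answer to the empty-day question.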

  • Comparing Dates in Oracle Business Rule Decision Tables

    - by James Taylor
    I have been working with decision tables for some time but have never had a scenario where I needed to compare dates. The use case was to check if a person's membership had expired. I didn't think much of it till I started to develop it. The first trap I fell into was trying to create ranges and bucket sets. The other trap I fell into was not converting the date field to a complete date. This may seem obvious to most people, but my Google searches came up with nothing, so I thought I would create a quick post. I assume everyone knows how to create a decision table, so I'm not going to go through those steps. The prerequisite for this post is to have a decision table with a payload that has a date field. This field must have the date in the following format: YYYY-MM-DDThh:mm:ss.
    1. Create a new condition in your decision table.
    2. Right-click on the condition to edit it and select the Expression Builder.
    3. In the Expression Builder, select the Functions tab.
    4. Expand the CurrentDate folder, select date, and click the Insert Into Expression button.
    5. You need to create an expression that will return true or false, so add the operation <= after CurrentDate.date.
    6. In my scenario my date field is memberExpire. Navigate to your date field, expand it, and select toGregorianCalendar(). Your expression will look something like CurrentDate.date <= memberExpire.toGregorianCalendar(). Click OK to get back to the decision table.
    Now it's just a matter of checking if the value is true or false. Simple when you know how :-)

  • FrameworkFolders support for WebCenter Portal

    - by Justin Paul-Oracle
    Oracle WebCenter Content Server includes components that provide a hierarchical folder interface, similar to a conventional file system, for organizing and managing some or all of the content in the repository. We used Contribution Folders (folders_g), which is being replaced by the new Folders (FrameworkFolders) component. The newer FrameworkFolders component fixes a number of limitations that folders_g had and adds a ton of new features. If you have played with the WebCenter Content mobile app, you will notice that it uses FrameworkFolders too. WebCenter Portal requires the use of the folders_g component. Given the fact that the folders_g and FrameworkFolders components do not go well together and must never be enabled together on any system, you could never use the new features if you planned to integrate Content with Portal. I still remember presenting a demo with a colleague for a client: he was showing off the mobile capabilities in Portal (which are pretty impressive, by the way) and was getting all the glory, while I was sitting back wanting to show off the Content app but could not (no FrameworkFolders). Not any more; Oracle released bundle patches for both WebCenter Content and Portal in April 2014 which bring home the support for FrameworkFolders. These patches bring these products to version 11.1.1.8.3. You can enable support for FrameworkFolders on new Content and Portal installations where the folders_g component has never been enabled using the following patches:
    - Download and apply the WebCenter Content MLR03 patch 18088049.
    - Download and apply the WebCenter Portal BP3 patch 18085041.
    - Download and apply the WebCenterConfigure component patch 18387955. Before applying this patch on the Content Server, ensure that the WebCenter Content MLR03 patch 18088049 has already been applied.
    You can find detailed instructions on how to enable FrameworkFolders support in the Oracle® Fusion Middleware Installation Guide for Oracle WebCenter Portal 11g Release 1 (11.1.1.8.3). There may be another bundle patch release (version 11.1.1.8.4) for many FMW products (including WebCenter Portal) in July 2014, which could further enhance the FrameworkFolders support.

  • How to rebuild a Li Ion laptop battery?

    - by spoulson
    I have an aging Gateway NX560XL laptop. The battery is toast and a new one, even aftermarket, starts at $130. So, to experiment, I began tearing apart the old battery to see what can be done. I found it used 8 standard size 18650 Li Ion cells arranged two cells parallel, then in series (like: ====). Some online shopping revealed ~$7-13/ea replacements depending on mAh output. My plan is to load test to determine the bad cells and replace only those, as I read that typically only 1 or 2 may be bad. I'm proficient with soldering; however, these cells are attached with welded tabs. Some of them broke during disassembly and I'm not sure how to reattach them. What I found online are cells like these that have solder tabs pre-welded to the ends, onto which I can solder wires. Is there any guide available that provides the instructions and parts to do this kind of rebuild?

  • How to Install Windows Media Player 11 on Wine

    - by namkid
    I am battling to install Windows Media Player 11 on Wine. I have tried the following:
    1. Open a Terminal (Applications, Accessories, Terminal) and type "sudo apt-get install wine". This installs Wine Windows Emulator, a free application that allows you to run many Windows programs within Linux.
    2. Download Windows Media Player 11 for Windows XP (link in Resources) and save it to Ubuntu's desktop. Once downloaded, right-click and select "Open with Wine Windows Emulator". Follow the on-screen prompts for installing it to your system.
    3. Go to "Applications", then "Wine", select "Programs" and open "Windows Media Player". Click "File" then "Open" and locate a DRM file you want to play. Select "OK" to load it into Media Player.
    I installed Wine (which is step 1), but I am having problems with step 2: downloading Windows Media Player 11 for Windows XP and saving it to Ubuntu's desktop. I'm just not finding a way to do it. I may be overlooking the "link in Resources" - I can't find it. I am stuck!

  • E.T. Phone "Home" - Hey I've discovered a leak..!

    - by Martin Deh
    Being a member of the WebCenter ATEAM, we are often asked to performance tune a WebCenter custom portal application or a WebCenter Spaces deployment. Most of the time, the process is pretty much the same. For example, we often use tools like HttpWatch and Firebug to monitor the application, and then perform load tests using JMeter or Selenium. In addition, there is the fine tuning of the different performance-based tuning parameters that are outlined in the documentation and in blogs written by my fellow ATEAMers (click on the "performance" tag in this ATEAM blog). When a load test shows a significant reduction in system resources (memory), one of the causes that plays a role in memory "leakage" is the implementation of the navigation menu UI. OOTB in both JDeveloper and WebCenter Spaces, there are sample (page) templates that include a "default" navigation menu. In WebCenter Spaces, this is provided through the SpacesNavigationModel taskflow region, and in a custom portal (i.e. pageTemplate_globe.jspx) the menu UI is constructed using standard ADF components. These sample menu UIs basically enable the underlying navigation model to visualize itself to some extent. However, due to certain limitations of these sample menu implementations (i.e. deeper sub-levels of navigation items, look-and-feel, etc.), many customers have developed their own custom navigation menus using a combination of HTML, CSS and jQuery. While this is supported somewhat by the framework, it is important to know some of the best practices for ensuring that the navigation menu does not leak. In addition, in this blog I will point out a leak (BUG) that is in the sample templates. OK, E.T., the suspense is killing me, what is this leak? Note: for those who don't know, info on E.T. can be found here. In both of the included templates, the example given for handling the navigation back to the "Home" page will essentially provide a nice little memory leak every time the link is clicked. Let's take a look at a simple example, which uses the default template in Spaces. The outlined section below is the "link", which is used to enable a user to navigate back quickly to the Group Space Home page. When you (mouse) hover over the link, the browser displays the target URL. From looking initially at the proposed URL, this is the intended destination. Note: "home" in this case is the navigation model reference (id), which enables the display of the "pretty URL". Next, notice the current URL, which is displayed in the browser. Remember that PortalSiteHome = home. The other highlighted item, adf.ctrl-state, is very important to the framework. This item is basically a persistent query parameter which is used by the (ADF) framework for managing the current session and page instance. Without this parameter present, among other things, the browser back-button navigation will fail. In this example, the value for this parameter is currently 95K25i7dd_4. Next, through the navigation menu item, I will click on the Page2 link. Inspecting the URL again, I can see that it reports that the navigation was indeed successful, and the adf.ctrl-state is also in the URL. For those who are wondering why the URL displays Page3.jspx instead of Page2.jspx: the (file) naming convention for pages created at runtime in Spaces starts at Page1 and then increments as you create additional pages. The name of the actual link (i.e. Page2) is the page "title" attribute.
    So the moral of the story is: unlike design-time created pages, for pages created at runtime the name of the file will 99% of the time not match the name that appears in the link. Next is to click on the quick link for navigating back to the Home page. Quick investigation yields that the navigation was indeed successful. In the browser's URL there is a home (pretty URL) reference, and there is also a reference to the adf.ctrl-state parameter. So what's the issue? Can you remember what the value was for the adf.ctrl-state? The current value is 95k25i7dd_149. However, the previous value was 95k25i7dd_4. Here is what happened. Remember when (mouse) hovering over the link produced the following target URL: http://localhost:8888/webcenter/spaces/NavigationTest/home This is great for the browser, as this URL will navigate to the intended target. However, what is missing is the adf.ctrl-state parameter. Since this parameter was not present upon navigation "within" the framework, the ADF framework produced another adf.ctrl-state (object). The previous adf.ctrl-state is basically orphaned while continuing to be alive in memory. Note: the auto-creation of the adf.ctrl-state does happen initially when you invoke the Spaces application for the first time. The following is the line of code which produced the issue: <af:goLink destination="#{boilerBean.globalLogoURIInSpace} ... Here the boilerBean is responsible for returning the "string" URL, which in this case is /spaces/NavigationTest/home. Unfortunately, again, what is missing is the adf.ctrl-state. Note: there is more than one instance of these goLinks in the sample templates. So E.T., how can I correct this? There are 2 simple fixes. For the goLink's destination, use the navigation model to return the actual "node" value, then use the goLinkPrettyUrl method to add the current adf.ctrl-state: <af:goLink destination="#{navigationContext.defaultNavigationModel.node['home'].goLinkPrettyUrl}" ... /> Note: the node value is the [navigation model id]. Using a goLink does solve the main issue. However, since the link basically does a redirect, some browsers like IE will produce a somewhat significant "flash". In a Spaces application, this may be an annoyance to the users. Another way to solve the leakage problem, and also remove the flash between navigations, is to use an af:commandLink. For example, here is the code example for this scenario: <af:commandLink id="pt_cl2asf" actionListener="#{navigationContext.processAction}" action="pprnav">    <f:attribute name="node" value="#{navigationContext.defaultNavigationModel.node['home']}"/> </af:commandLink> Here, the navigation node to where home is located is delivered by way of the attribute to the commandLink. The actual navigation is performed by processAction, which needs the "node" value.
    The important thing to note here is that the <a> tag value must use the goLinkPrettyUrl method of constructing the target URL. For example: <a href="${contextRoot}${menu.goLinkPrettyUrl}"> The main reason why this type of approach is popular is that links created this way (as with af:goLinks) make the pages crawlable by search engines. CommandLinks are currently not search friendly. However, in the case of a Spaces instance this may be acceptable. So in this use case, af:commandLinks would replace the <a> (or goLink) tags. The af:commandLink example code given above is still valid. One last important item. If you choose to use af:commandLinks, special attention must be given to the scenario in which JavaScript has been used to produce the flyout effect in the custom menu UI. In many cases that I have seen, the commandLink can only be invoked once, since there is a conflict between the custom JavaScript and the ADF framework's own scripting to control the view. The recommendation here would be to use a pure CSS approach to achieve the dropdown effects. One very important thing to note: due to another BUG, the WebCenter environment must be patched to BP3 (patch p14076906). Otherwise the leak is still present using the goLinkPrettyUrl method. Thanks E.T.! Now I can phone home and not worry about my application running out of resources due to my custom navigation!

  • OpenGL CPU vs. GPU

    - by Nitrex88
    So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL, I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved). For example, in a recent project I did involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader, and tessellated each of these triangles into quads with 400 vertices whose height was determined by a noise function. This worked fine, and looked great, but easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader, was actually faster! This prompted me to question the benefits of using the GPU as much as possible... So, I was wondering if someone could describe the general pros and cons of using the GPU vs. the CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the "CPU + vertex shader" was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (newest MacBook Pro) isn't optimized well for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!

  • Fair dice over network w/o trusted 3rd party

    - by Kay
    Though it should be a pretty basic problem, I did not find a solution for it: how to play dice over a network without a trusted third party? The M players shall roll N dice, one player after another. No player may "cheat", i.e. change the outcome to his advantage, or "look into the future" before the next roll. Is that possible? I guess the solution would be something like public key crypto, where each player turns in an encrypted message. After all messages are collected, you exchange the keys to decode the messages. Then sha1(joined string of all decrypted messages) mod 6 + 1 is used to determine the die. The major problem I have: since the message [c/s]hould be anything, I don't know how to prevent tampering with the private keys. Especially the last player to turn in his key could easily cheat (I guess). The game should stay fair even if all players "conspire" against one player.
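    What's described here is essentially a commit-reveal protocol, and it can be built from hashes alone, without exchanging private keys: each player first broadcasts a commitment to a random secret, and only after all commitments are collected does anyone reveal. No one can predict the roll before the last reveal, and no one can change their secret afterward without breaking their own commitment, even if all other players conspire. A minimal sketch in Python (the modulo bias of a 256-bit hash over 6 outcomes is negligible; the remaining classic caveat is a player aborting after seeing the others' reveals):

        import hashlib
        import os

        def commit(secret: bytes) -> str:
            # Broadcast this digest before anyone reveals their secret.
            return hashlib.sha256(secret).hexdigest()

        def roll(secrets: list) -> int:
            # After all commitments are exchanged, everyone reveals; each
            # player checks commit(secret) against the earlier commitment
            # before accepting it, then computes the die locally.
            digest = hashlib.sha256(b"".join(secrets)).digest()
            return int.from_bytes(digest, "big") % 6 + 1

        # Each of the M players does this once per die:
        my_secret = os.urandom(32)
        my_commitment = commit(my_secret)      # step 1: broadcast commitment
        # step 2: collect everyone's commitments
        # step 3: broadcast my_secret; verify everyone else's reveal
        # step 4: die = roll([secret_a, secret_b, secret_c])
        print(roll([os.urandom(32) for _ in range(3)]))  # demo with 3 local "players"

    For N dice per turn, use fresh secrets for each die (or hash a die index into the digest), so earlier rolls never let anyone look into the future.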

  • Change desktop background at school

    - by Nano8Blazex
    On school computers, I can log in with a user account stored on the school network (something like that, I have no experience in networking and this sort of stuff). Everything is fine and dandy and totally works as it should, but there is one thing that I find annoying. Apparently for some reason I can't change my background to anything more than a couple of different solid colors with our school's logo still stuck in the middle. (the original background is a white logo on black background. If I change it to a different color, the central 6x6 inch black/white logo still remains, only the surrounding color is changed.) It may have been set by school administrators or something, I don't really know. I find this really ugly. Is there any way to change a setting so that I can set the background to any picture I wish? (like on a home pc...) Thanks.

  • Socket.io v.9 with Actionscript

    - by funseiki
    I'm attempting to develop an online multiplayer game using Node.js for the server and Flash to display the client. I've been reading up a bit and have found quite a few recommendations for the socket.io library. I've also found a github project which exposes code to help facilitate communication between an Actionscript 3.0 client and a server using socket.io. The project I mentioned is a bit dated and doesn't seem to have support for the latest version of socket.io, so I was wondering if leveraging this framework (socket.io, that is) would be the most ideal way to go. I have found a simple project that uses the standard 'net' module for node.js, but because there are a few options available, I'm a little lost as to which one to go with. I'm currently leaning towards just using the regular 'net' module as it is already familiar to me. Since much of the client is already coded up, I'd really like to not switch over to using the HTML5 canvas just yet (but using socket.io would make a transition in the future more friendly, I think?). Any advice/direction on this matter would be much appreciated, though I do realize that there may be no one right answer. Edit: To be more specific, are there any client-side socket.io frameworks available that allow for communication between an Actionscript 3.0 client and a socket.io server and are robust enough to support current/future versions of socket.io? If not, what are the alternatives?
