Search Results

Search found 20283 results on 812 pages for 'security context'.


  • One codebase - lots of hosted services (similar to a basecamp style service) - planning structure

    - by RickM
    We have built a service (PHP based) for a client, and are now looking to offer it to other clients as a hosted service. For this example, think of it like a hosted forum service, where a client signs up on our site and is given a subdomain or can use their own domain; the code picks up the domain, checks it against a 'master' users table, and then loads the content as needed. I'm trying to work out the best way of handling multiple clients. At the moment I can only think of two options that would work:
    Option 1 - Have one set of database tables, but on each table have a column called 'siteid' - this would mean every query has to check the siteid. This would effectively work with just one codebase and one database.
    Option 2 - Have one 'master' database with all the core stuff such as the client details and their domain. Then when the system checks the domain, it pulls the client's database details (username/password/dbname) from a table and loads a second database. The issue here is security of the MySQL server details; however, it does have the benefit that each client is running their own database instead of sharing one.
    Which option would I be better taking here, and why? Ideally I want it to be fairly easy to convert the 'standalone' script to the 'multi-domain' script as we're on a tight deadline.
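
    As a point of reference, here is a minimal, hedged sketch of how Option 1's per-site scoping might look in PHP with PDO. The table and column names (sites, posts, site_id) and the DSN are illustrative assumptions, not taken from the question:

        <?php
        // Option 1 sketch: one shared schema, every query scoped by the tenant's site_id.
        $pdo = new PDO('mysql:host=localhost;dbname=hosted_service;charset=utf8', 'appuser', 'secret');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        // Resolve the tenant once per request from the incoming host name.
        $stmt = $pdo->prepare('SELECT id FROM sites WHERE domain = ?');
        $stmt->execute(array($_SERVER['HTTP_HOST']));
        $siteId = $stmt->fetchColumn();
        if ($siteId === false) {
            exit('Unknown site');
        }

        // Every subsequent query carries the site_id filter.
        $stmt = $pdo->prepare('SELECT * FROM posts WHERE site_id = ? ORDER BY created_at DESC');
        $stmt->execute(array($siteId));
        $posts = $stmt->fetchAll(PDO::FETCH_ASSOC);

    The main discipline Option 1 demands is that no query ever leaves out the site_id filter; a thin query wrapper that appends it automatically is one way to enforce that.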

    Read the article

  • Public JCP EC Meeting - 26 June

    - by heathervc
    The first 2012 public JCP Executive Committee (EC) Teleconference Meeting is scheduled for next Tuesday, 26 June at 8:00 AM Pacific Time (PDT). This meeting is open to the participation of all (members and non-members). JCP 2.8 (JSR 348) set the requirement for the JCP to hold two public teleconferences each year for the developer community to meet with the JCP EC. There will also be a public EC Face to Face Meeting during the 2012 JavaOne Conference; details to follow soon. The meeting details for Tuesday morning are below. Please participate!
    Meeting details
    Date & Time: Tuesday June 26, 2012, 8:00 - 9:00 am PDT
    Location: Teleconference
    Dial-in: +1 (866) 682-4770, Conference code: 627-9803, Security code: 52732 ("JCPEC" on your phone handset). For global access numbers see http://www.intercall.com/oracle/access_numbers.htm. Or +1 (408) 774-4073.
    WebEx: Browse for the meeting from https://jcp.webex.com. No registration required (enter your name and email address). Password: 52732
    Agenda:
    - JCP.next status: overview of JSRs 355 and 358
    - JCP events at JavaOne
    - Annual awards
    - Improving communications between the EC and the community
    - Q&A
    Note: The call will be recorded and the recording published on jcp.org, so those who are unable to join in real-time will still be able to participate.

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-12

    - by Bob Rhubart
    15 Lessons from 15 Years as a Software Architect | Ingo Rammer
    In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years.
    Adding a runtime picker to a taskflow parameter in WebCenter | Yannick Ongena
    Oracle ACE Yannick Ongena shows how to create an Oracle WebCenter popup to allow users to "select items or do more complex things."
    Oracle Identity Manager 11g R2 Catalog | Daniel Gralewski
    Oracle Fusion Middleware A-Team blogger Daniel Gralewski shares a detailed overview of the new Catalog feature, one of the most talked about features in the latest release of Oracle Identity Manager 11g.
    Cloud API and service designers, stop thinking small | Cloud Computing - InfoWorld
    "The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database," says InfoWorld's David Linthicum. "To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions -- inside and outside the enterprise."
    Oracle Solaris 8 P2V with Oracle database 10.2 and ASM | Orgad Kimchi
    Orgad Kimchi's technical post illustrates the migration of "a Solaris 8 physical system, with Oracle database version 10.2.0.5 with ASM file-system located on a SAN storage, into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain."
    Thought for the Day
    "The hardest single part of building a software system is deciding precisely what to build." — Fred Brooks
    Source: SoftwareQuotes.com

    Read the article

  • NMap 6.01

    - by TATWORTH
    NMap 6.01 has been released at http://nmap.org/download.html
    "Nmap ("Network Mapper") is a free and open source (license) utility for network discovery and security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large networks, but works fine against single hosts. Nmap runs on all major computer operating systems, and official binary packages are available for Linux, Windows, and Mac OS X. In addition to the classic command-line Nmap executable, the Nmap suite includes an advanced GUI and results viewer (Zenmap), a flexible data transfer, redirection, and debugging tool (Ncat), a utility for comparing scan results (Ndiff), and a packet generation and response analysis tool (Nping)."
    The home page is at http://nmap.org/. Nmap is free to download and use. You can download the source and compile it yourself if you so require.
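
    For a quick feel for the tool, a couple of hedged example invocations (the flags below are standard Nmap options; scanme.nmap.org is the project's own test host, used here purely as an illustration of a permitted target):

        # Service/version detection (-sV) and OS detection (-O) against a single host
        sudo nmap -sV -O scanme.nmap.org

        # Fast scan (-F) of the most common ports across a whole /24 network
        nmap -F 192.168.1.0/24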

    Read the article

  • How to help FGLRX detect a device

    - by user113416
    I have an HD 4850 card and Ubuntu 12.10, and installed the legacy drivers using the makson96 PPA. The issue is that FGLRX cannot detect my device and the VESA BIOS is loaded instead. I had the same problem on Ubuntu 11.10 and 12.04. I want to manually help fglrx find a matching device to load, as it should do. It is interesting that fglrx searches for a device on bus PCI:0@1:0:1, while xorg.conf indicates a different bus:
    Section "Device"
        Identifier "aticonfig-Device[0]-0"
        Driver "fglrx"
        BusID "PCI:1:0:0"
    EndSection
    fglrxinfo:
    display: :0.0  screen: 0
    OpenGL vendor string: Advanced Micro Devices, Inc.
    OpenGL renderer string: ATI Radeon HD 4800 Series
    OpenGL version string: 3.3.11653 Compatibility Profile Context
    Here is a part of my Xorg log:
    [ 3.846] (II) VESA: driver for VESA chipsets: vesa
    [ 3.846] (II) FBDEV: driver for framebuffer: fbdev
    [ 3.846] (++) using VT number 7
    [ 3.846] (WW) Falling back to old probe method for fglrx
    [ 3.883] (II) Loading PCS database from /etc/ati/amdpcsdb
    [ 3.883] (--) Assigning device section with no busID to primary device
    [ 3.883] (--) Chipset Supported AMD Graphics Processor (0x9442) found
    [ 3.884] (WW) fglrx: No matching Device section for instance (BusID PCI:0@1:0:1) found
    [ 3.884] (II) AMD Video driver is running on a device belonging to a group targeted for this release
    [ 3.884] (II) AMD Video driver is signed
    [ 3.884] (II) fglrx(0): pEnt->device->identifier=0xb7791d8f
    [ 3.884] (WW) Falling back to old probe method for vesa
    [ 3.884] (WW) Falling back to old probe method for fbdev
    Thanks in advance.

    Read the article

  • Domain Transfer Protection - need advice

    - by Jack
    Hey, I am about to purchase a domain name for a bit of money. I do not personally know the person I am purchasing the domain name from; we have only chatted via email. The proposed process for the transfer is:
    1. The owner of the domain lowers the domain name security and emails me the domain password, and I request the transfer.
    2. After the request, I transfer the money via PayPal.
    3. When the money has cleared, the current domain name owner confirms the transfer via the link he receives in the transfer email.
    4. I wait for it to be transferred.
    The domain is currently registered with DirectNIC - http://www.directnic.com/
    Is this best practice? Seeing as I am paying a bit of money for this domain name, I am worried that after the money has cleared I won't see the domain name or hear from the current domain name owner again. Is there a 'domain governing body' I can report to if this is the case? Is the proposed transfer process the best solution? Any advice would be awesome. Thanks! Jack

    Read the article

  • Facebook App EULA & Restrictions: What can't they do that my web app can?

    - by Adam Tannon
    I have written a nifty little web app (in Java/GWT/JS) and have been experimenting with the idea of making it available through Facebook as a Facebook App as well. After spending some time reading Facebook's developer docs, it seems like I can just create a Facebook App to point at any URL I want and use that as the app/canvas. It accomplishes this via iframes. So, my tentative plan is to just point it towards my (existing) web app so that I don't have to totally re-write it. But then that got me thinking: Facebook must regulate what sorts of things can and can't be done through a Facebook App. For instance, I can't imagine I can point a Facebook App at a URL for a web app that accepts e-commerce payments (that would bypass Facebook altogether and not allow them to take a cut from the e-commerce transaction!). Also, I can't imagine that Facebook allows developers to point their Facebook Apps at just any old URL without some sort of a scan, otherwise that would open Facebook up to the horrors of every security threat known to humanity. I know for a fact that when you write an iOS native app and put it up on the Apple App Store, Apple actually scans your source code for violations of their EULA. So my question: does Facebook do the same? If so, what are their terms & conditions for what a Facebook app can/can't do? Surprisingly, I can't find this anywhere!! Thanks in advance!

    Read the article

  • Avoid overwriting all the methods in the child class

    - by Heckel
    The context: I am making a game in C++ using SFML. I have a class that controls what is displayed on the screen (the manager in the image below). It has a list of all the things to draw, like images, text, etc. To be able to store them in one list I created a Drawable class from which all the other drawable classes inherit. The image below represents how I would organize each class. Drawable has a virtual method Draw that will be called by the manager; Image and Text override this method. My problem is that I would like the Image::draw method to also work for Circle, Polygon, etc., since sf::CircleShape and sf::ConvexShape inherit from sf::Shape. I thought of two ways to do that.
    My first idea would be for Image to have a pointer to sf::Shape, and the subclasses would make it point to their sf::CircleShape or sf::ConvexShape members (like in the image below). In the Polygon constructor I would write something like:
    ptr_shape = &polygon_shape;
    This doesn't look very elegant because I have two variables that are, in fact, just one.
    My second idea is to store the sf::CircleShape or sf::ConvexShape inside ptr_shape, like:
    ptr_shape = new sf::ConvexShape(...);
    and to use a function that exists only in ConvexShape I would cast it like so:
    ((sf::ConvexShape*)ptr_shape)->convex_method();
    But that doesn't look very elegant either, and I am not even sure I am allowed to do that.
    My question: I added details about the whole thing because I thought that maybe my whole architecture was wrong. I would like to know how I could design my program to be safe without overriding all the Image methods. I apologize if this question has already been asked; I have no idea what to google.
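
    For what it is worth, a minimal hedged sketch of the first idea done with a base-class pointer that each subclass aims at its own concrete shape member; class and member names here are illustrative, not taken from the original design:

        // Image draws through a non-owning sf::Shape*, so Circle and Polygon only
        // need to tell the base class where their concrete shape lives.
        #include <SFML/Graphics.hpp>

        class Drawable {
        public:
            virtual ~Drawable() {}
            virtual void draw(sf::RenderWindow& window) = 0;
        };

        class Image : public Drawable {
        public:
            void draw(sf::RenderWindow& window) { window.draw(*shape_); }
        protected:
            explicit Image(sf::Shape* shape) : shape_(shape) {}
            sf::Shape* shape_;   // non-owning: points at the member held by the subclass
        };

        class Circle : public Image {
        public:
            explicit Circle(float radius) : Image(&circle_), circle_(radius) {}
        private:
            sf::CircleShape circle_;
        };

        class Polygon : public Image {
        public:
            explicit Polygon(unsigned pointCount) : Image(&polygon_), polygon_(pointCount) {}
        private:
            sf::ConvexShape polygon_;
        };

    The subclass still owns the concrete shape object; the base class only borrows a pointer to it, so there are never two copies of the same data, and no casting is needed for drawing.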

    Read the article

  • Can't install Ubuntu Software Center

    - by byf-ferdy
    I'm running Ubuntu 13.10 32-bit with GNOME 3.8 but am missing the Ubuntu Software Center. I tried to install it via the terminal:
    $ sudo apt-get install software-center
    But that tells me that dependencies are not met:
    The following packages have unmet dependencies:
      software-center : Depends: gir1.2-webkit-3.0 but it is not going to be installed
    gir1.2-webkit-3.0 depends on gir1.2-javascriptcoregtk-3.0 of version 1.10.2-0ubuntu2, but that package is only available as version 2.0.4-2~ubuntu13.04. I am missing the Ubuntu Software Center as well as the Update Manager and the packages update-notifier and ubuntu-release-upgrader-gtk. How can I install the packages with correct dependencies?
    Edit: Output of apt-cache policy gir1.2-javascriptcoregtk-3.0:
    gir1.2-javascriptcoregtk-3.0:
      Installed: 2.0.4-2~ubuntu13.04.1
      Candidate: 2.0.4-2~ubuntu13.04.1
      Version table:
     *** 2.0.4-2~ubuntu13.04.1 0
            100 /var/lib/dpkg/status
         1.10.2-0ubuntu2 0
            500 http://de.archive.ubuntu.com/ubuntu/ saucy/main i386 Packages
    My sources.list:
    deb http://de.archive.ubuntu.com/ubuntu/ saucy main restricted universe multiverse
    deb http://de.archive.ubuntu.com/ubuntu/ saucy-security main restricted universe multiverse
    deb http://de.archive.ubuntu.com/ubuntu/ saucy-updates main restricted universe multiverse
    deb http://archive.canonical.com/ubuntu saucy partner
    deb http://extras.ubuntu.com/ubuntu saucy main
    # spotify
    deb http://repository.spotify.com stable non-free
    The Spotify line I added myself.
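
    Judging from the apt-cache output, the installed 2.0.4-2~ubuntu13.04.1 build exists only in the local dpkg status (priority 100) and is not offered by any saucy repository. One hedged possibility, assuming it is a leftover from a pre-release or PPA package, is to downgrade it to the version the saucy archive actually ships using apt-get's explicit pkg=version syntax, then retry the install; this is an assumption about the cause, not a guaranteed fix:

        sudo apt-get install gir1.2-javascriptcoregtk-3.0=1.10.2-0ubuntu2
        sudo apt-get install software-center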

    Read the article

  • Any danger in using the Wine workaround in 12.04?

    - by TrailRider
    To run certain Windows programs in WINE you need this workaround:
    echo 0|sudo tee /proc/sys/kernel/yama/ptrace_scope
    According to the support websites, this is due to a bug in the Ubuntu kernel that prevents ptrace and WINE playing well together. Using the above command you set ptrace to 0, which, according to the research I've done (don't ask me which websites, I have seen a lot of them), has to do with the interactions between programs. The 0 setting is more permissive than the 1. I have to assume that there was a good reason Ubuntu wanted ptrace=1, so this leads me back to the short form of the question: are there any risks involved in setting ptrace=0? Lower security? Problems debugging? Any others that I haven't thought of?
    P.S. For anybody reading this who wonders what the bug causes: the Windows programs will fail to open at all; in the System Monitor you will see many instances of the program trying to open and then they will eventually all quit, and if you run the program from the terminal you will get an error telling you that the maximum number of program instances has been reached.
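
    For reference, a hedged sketch of how the setting can be inspected, reverted, and (only if you decide the trade-off is acceptable) made persistent across reboots. On 12.04 the default ships in /etc/sysctl.d/10-ptrace.conf, so that file is the usual place to persist a change; the commands below are standard sysctl/tee usage, not a recommendation either way:

        # Check the current value (1 = restricted, 0 = permissive)
        cat /proc/sys/kernel/yama/ptrace_scope

        # Revert to the default once you are done with the Windows program
        echo 1 | sudo tee /proc/sys/kernel/yama/ptrace_scope

        # Make the permissive setting survive reboots (only if you accept the risk)
        echo 'kernel.yama.ptrace_scope = 0' | sudo tee /etc/sysctl.d/10-ptrace.conf
        sudo sysctl -p /etc/sysctl.d/10-ptrace.conf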

    Read the article

  • How can I draw crisp per-pixel images with OpenGL ES on Android?

    - by Qasim
    I have made many Android applications and games in Java before, but I am very new to OpenGL ES. Using guides online, I have made simple things in OpenGL ES, including a simple triangle and a cube. I would like to make a 2D game with OpenGL ES, but what I've been doing isn't working quite so well: the images I draw aren't to scale, and no matter what guide I use, the image is always choppy and not the right size (I'm debugging on my Nexus S). How can I draw crisp, HD images to the screen with GL ES? Here is an example of what happens when I try to do it: [image] And the actual image: [image]
    Here is how my texture is created:
    //get id
    int id = -1;
    gl.glGenTextures(1, texture, 0);
    id = texture[0];
    //get bitmap
    Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.ball);
    //parameters
    gl.glBindTexture(GL10.GL_TEXTURE_2D, id);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
    gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_REPLACE);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
    //crop image
    mCropWorkspace[0] = 0;
    mCropWorkspace[1] = height;
    mCropWorkspace[2] = width;
    mCropWorkspace[3] = -height;
    ((GL11) gl).glTexParameteriv(GL10.GL_TEXTURE_2D, GL11Ext.GL_TEXTURE_CROP_RECT_OES, mCropWorkspace, 0);
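
    A hedged sketch of the usual recipe for pixel-exact 2D sprites with the draw-texture extension used above: keep both filters at GL_NEAREST and draw the texture at its native size in window coordinates. The x, y, width and height variables are assumed to hold the sprite's screen position and the bitmap's pixel dimensions; this is an illustration, not a drop-in fix:

        // Nearest-neighbour filtering avoids the blur introduced by GL_LINEAR when
        // the sprite is not drawn at exactly its native size.
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_NEAREST);

        // Draw the sprite 1:1 at (x, y) in window coordinates using the crop rect set earlier.
        ((GL11Ext) gl).glDrawTexfOES(x, y, 0.0f, width, height);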

    Read the article

  • "Oracle Coherence 3.5" Book - My Humble Review

    - by [email protected]
    After reviewing the book in more detail I say again that it is a great guide for sure. Lots of important concepts that can sometimes be somewhat confusing are reviewed in depth, including all types of caching schemes and backing maps, and the cache topologies with their corresponding performance characteristics and very useful "When to use it?" sections. Some functionalities that are very desirable or heavily used are reviewed with examples and implementation best practices, including:
    - Data affinity
    - Querying
    - Pagination
    - Indexes
    - Aggregations
    - Event processing, listening and triggering
    - Data persistence
    - Security
    Regarding the networking and architecture topics, Coherence*Extend is exhaustively reviewed, including C++ and .NET clients, with very good tips and examples, even including source code. Personally, I am also glad to see that address providers (the <address-provider> tag), a new feature in Coherence 3.5 that provides a way to programmatically supply well-known addresses in order to connect to the cluster, are mentioned in the book, because they satisfy some special configuration requirements, for example:
    - Provide a way to switch Extend nodes in cases of failure
    - Implement custom load balancing algorithms and/or dynamic discovery of TCP/IP connection acceptors
    - Dynamically assign TCP address and port settings when binding to a server socket
    Another very interesting and useful section is the "Coherent Bank Sample Application", which is a great tutorial, useful for understanding how Coherence interacts with third-party products, establishing a clear integration with them, including the use of non-Oracle products like MS Visual Studio.

    Read the article

  • Notification framework for object lifecycle

    - by rlandster
    I am looking for an application, framework, or library that would help us with "object life-cycle management". There are many things that are created for users, departments, and services that, all too often, are left unmanaged. Some examples:
    - user accounts
    - groups
    - SSL certificates
    - access rights
    - databases
    - software license provisionings
    - storage
    - list-serve accounts
    These objects are created and managed by a wide variety of applications and systems. Typically, a user (person) requests (either explicitly or implicitly) one of these objects. A centralized management tool would help us manage such administration chores as:
    - What objects does user X currently own/manage?
    - Move the ownership of object P to user X; move all objects owned by user X (who has just been fired) to user Y.
    - For all objects of type T that have expired, be sure the objects have been disabled or deleted by their provider.
    - How many active (expired, about-to-expire) objects of type P are there?
    - Send periodic notifications to all users who own active objects of type P reminding them of what they own.
    - There is a security alert for objects of type P; send a notification to all users who own these types of objects to take a specific remedial action.
    - Delete or disable a set of objects based on expiration (or some other criteria).
    These objects are directly managed through their own applications (Active Directory, MySQL, file systems, etc.) and may even have their own notification systems, but I want to centralize this into an "object management system". The OMS should allow:
    - association with an external identity provider that defines who the users and groups are (e.g., LDAP, Active Directory)
    - creation of objects
    - association of an object to a specific user and/or group
    - association with an expiration date
    - creation of flexible reporting, including letting users know what objects they currently own and their expiration dates
    - integration with an external object "provider" via a plug-in
    We could write something from scratch, but I am hoping there is something already out there that will help, either an entire application or a set of libraries that provide much of what is needed. Any ideas?

    Read the article

  • Gain More From Your Oracle Investments

    - by Oracle OpenWorld Blog Team
    By Yaldah Hakim, Oracle Managed Cloud Services
    Oracle Managed Cloud Services enables organizations to leverage their Oracle investments by extending them into the cloud, for greater value, choice, and confidence. At Oracle OpenWorld, Oracle Managed Cloud Services has numerous activities and educational sessions planned so you can explore how your organization will benefit from the power of Oracle software and hardware in the cloud. Here are just a few of the Oracle Managed Cloud Services breakout sessions you can attend:
    - Moving into the Cloud with Oracle Cloud Services
    - Upgrade your Oracle Applications into the Cloud
    - Cloud Services: Security and Compliance in the Cloud
    And don't forget to check out the Oracle Cloud Services Lounge at Moscone West Level 3, where you can schedule one-on-one meetings with the cloud services experts.
    Lounge Hours:
    Monday, October 1: 10:00 a.m. - 6:00 p.m.
    Tuesday, October 2: 10:00 a.m. - 6:00 p.m.
    Wednesday, October 3: 10:00 a.m. - 4:00 p.m.
    Thursday, October 4: 10:00 a.m. - 2:00 p.m.
    For a schedule of all Managed Cloud Services activities at Oracle OpenWorld, go here.

    Read the article

  • How do I backup my customer's data?

    - by marcamillion
    If you run a SaaS app, or work on one, I would love to hear from you. Where the safety and security of your customers' data is paramount, how do you secure it and back it up? I would love to know your main host (e.g. Heroku, Engine Yard, Rackspace, MediaTemple, etc.) and who you use for your backups. Be as detailed as possible - e.g. a quick overview of your service and the data you store (images, for instance), what happens with the images when the user uploads them (e.g. they go to your Linode VPS and are posted to the site for the user to see, then they are automatically sent to AWS or wherever, then once a week they are backed up to tape by the managed hosting provider, and you also back them up to your house/office). If you could also give some idea as to what the unit cost (per GB/per user/per month) of storage is, on average, I would really appreciate that. I'm getting ready to launch my app, and I would love to get some more perspective on the nitty-gritty details involved. Thanks!

    Read the article

  • Concurrency pattern of logger in multithreaded application

    - by Dipan Mehta
    The context: We are working on a multi-threaded (Linux, C) application that follows a pipeline model. Each module has a private thread and encapsulated objects which do processing of data, and each stage has a standard form of exchanging data with the next unit. The application is free from memory leaks and is thread-safe, using locks at the points where data is exchanged. The total number of threads is about 15, and each thread can have from 1 to 4 objects, making about 25-30 odd objects which all have some critical logging to do.
    Most discussion I have seen is about logging levels, as in Log4J and its ports to other languages. The real big question is how the overall logging should actually happen.
    One approach is that all local logging does fprintf to stderr, and stderr is redirected to some file. This approach is very bad when logs become too big.
    If all objects instantiate their individual loggers (about 30-40 of them), there will be too many files, and unlike the above, one won't have the true order of events. Timestamping is one possibility, but it is still a mess to collate.
    If there is a single global logger (singleton pattern), it indirectly blocks many threads while one is busy writing logs. This is unacceptable when the processing in the threads is heavy.
    So what should be the ideal way to structure the logging objects? What are some of the best practices in actual large scale applications? I would also love to learn from some of the real designs of large scale applications to get inspiration!
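
    One common answer to the "single logger blocks everyone" concern is an asynchronous logger: each object only takes a short lock to enqueue an already-formatted line, and a single dedicated writer thread drains the queue to one file, which also preserves a global order of events. A minimal hedged sketch with POSIX threads (fixed-size ring buffer; names and sizes are illustrative, and orderly shutdown is omitted for brevity):

        #include <pthread.h>
        #include <stdio.h>
        #include <string.h>

        #define QSIZE  1024
        #define MSGLEN 256

        static char            queue[QSIZE][MSGLEN];
        static int             head = 0, tail = 0;
        static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

        /* Called by any pipeline object: copies the line and returns quickly. */
        void log_enqueue(const char *msg)
        {
            pthread_mutex_lock(&lock);
            if ((head + 1) % QSIZE != tail) {          /* if the queue is full, drop the line */
                strncpy(queue[head], msg, MSGLEN - 1);
                queue[head][MSGLEN - 1] = '\0';
                head = (head + 1) % QSIZE;
                pthread_cond_signal(&not_empty);
            }
            pthread_mutex_unlock(&lock);
        }

        /* Single writer thread: the only code that ever touches the log file. */
        void *log_writer(void *arg)
        {
            FILE *fp = (FILE *)arg;
            pthread_mutex_lock(&lock);
            for (;;) {
                while (head == tail)
                    pthread_cond_wait(&not_empty, &lock);
                while (head != tail) {
                    char line[MSGLEN];
                    memcpy(line, queue[tail], MSGLEN);
                    tail = (tail + 1) % QSIZE;
                    pthread_mutex_unlock(&lock);       /* do the slow I/O without the lock */
                    fputs(line, fp);
                    pthread_mutex_lock(&lock);
                }
                fflush(fp);
            }
            return NULL;
        }

    Starting the writer with pthread_create(&tid, NULL, log_writer, fopen("app.log", "a")) is all the wiring needed; producers never block on disk I/O, only on the very short enqueue critical section.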

    Read the article

  • Using visitor pattern with large object hierarchy

    - by T. Fabre
    Context: I've been using a "pseudo" visitor pattern (pseudo, as in it does not use double dispatch) with a hierarchy of objects (an expression tree):
    public interface MyInterface {
        void Accept(SomeClass operationClass);
    }
    public class MyImpl : MyInterface {
        public void Accept(SomeClass operationClass) {
            operationClass.DoSomething();
            operationClass.DoSomethingElse();
            // ... and so on ...
        }
    }
    This design, however questionable, was pretty comfortable, since the number of implementations of MyInterface is significant (~50 or more) and I didn't need to add extra operations. Each implementation is unique (it's a different expression or operator), and some are composites (i.e., operator nodes that contain other operator/leaf nodes). Traversal is currently performed by calling the Accept operation on the root node of the tree, which in turn calls Accept on each of its child nodes, which in turn... and so on.
    But the time has come where I need to add a new operation, such as pretty printing:
    public class MyImpl : MyInterface {
        // Property does not come from MyInterface
        public string SomeProperty { get; set; }
        public void Accept(SomeClass operationClass) {
            operationClass.DoSomething();
            operationClass.DoSomethingElse();
            // ... and so on ...
        }
        public void Accept(SomePrettyPrinter printer) {
            printer.PrettyPrint(this.SomeProperty);
        }
    }
    I basically see two options:
    1. Keep the same design, adding a new method for my operation to each derived class, at the expense of maintainability (not an option, IMHO).
    2. Use the "true" Visitor pattern, at the expense of extensibility (not an option, as I expect to have more implementations coming along the way...), with about 50+ overloads of the Visit method, each one matching a specific implementation?
    Question: Would you recommend using the Visitor pattern? Is there any other pattern that could help solve this issue?
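
    For comparison, a minimal hedged sketch of what the double-dispatch version might look like in C# (type names are illustrative, not taken from the original code). The 50+ overload concern can be softened by giving the visitor interface a catch-all overload, so new node types keep compiling until they receive a dedicated Visit method:

        using System;

        public interface IVisitor
        {
            void Visit(Constant node);
            void Visit(Addition node);
            void Visit(ExpressionNode node);   // catch-all: new node types land here by default
        }

        public abstract class ExpressionNode
        {
            public abstract void Accept(IVisitor visitor);
        }

        public class Constant : ExpressionNode
        {
            public double Value { get; set; }
            public override void Accept(IVisitor visitor) { visitor.Visit(this); }
        }

        public class Addition : ExpressionNode
        {
            public ExpressionNode Left { get; set; }
            public ExpressionNode Right { get; set; }

            public override void Accept(IVisitor visitor)
            {
                Left.Accept(visitor);          // traversal stays in the nodes, as in the current design
                Right.Accept(visitor);
                visitor.Visit(this);           // resolves to Visit(Addition) at compile time
            }
        }

        public class PrettyPrinter : IVisitor
        {
            public void Visit(Constant node)       { Console.Write(node.Value); }
            public void Visit(Addition node)       { Console.Write(" + "); }   // emitted after both operands (postfix style)
            public void Visit(ExpressionNode node) { }                          // unknown node types are simply skipped
        }

    Each Accept body is one line per class, so adding an operation means writing one new visitor rather than touching all 50 implementations; the price, as noted, is one Visit overload per concrete type that needs special handling.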

    Read the article

  • OBIEE 11.1.1.6.5 Bundle Patch released Oct 2012

    - by user554629
    October 2012: the OBIEE 11.1.1.6.5 Bundle Patch has been released. Bundle patches are collections of controlled, well-tested critical bug fixes for a specific product, which may include security content and occasionally minor enhancements. They are cumulative in nature, meaning the latest bundle patch in a particular series includes the contents of the previous bundle patches released. A suite bundle patch is an aggregation of multiple product bundle patches that are part of a product suite. For OBIEE 11.1.1.6.0, we plan to run a monthly bundle patch cadence.
    The 11.1.1.6.5 bundle patch:
    - is available for download from My Oracle Support
    - is cumulative, so it includes everything from previous updates
    - is available for supported platforms (Windows, Linux, Solaris, AIX, HPUX-IA)
    To find it: navigate to https://support.oracle.com and log in, open the Knowledge Base tab, select a product line [Business Intelligence], select a task [Patching and Maintenance], and click Search. See "OBIEE 11g: Required and Recommended Patches and Patch Sets", ID 1488475.1 (Oct 23, 2012); 11.1.1.6.5 was published 19th October 2012.
    Note: the 11.1.1.6.x versions on top of 11.1.1.6.0 are not upgrades, they are opatch fixes. This is not an upgrade process like going from OBIEE 10g to 11g, or from OBIEE 11.1.1.5 to 11.1.1.6. It is much safer than applying one-off fixes, which are not regression tested. You will be more successful using 11.1.1.6.5.

    Read the article

  • X Error of failed request: BadMatch [migrated]

    - by Andrew Grabko
    I'm trying to execute some "hello world" OpenGL code:
    #include <GL/freeglut.h>
    
    void displayCall() {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_DEPTH_TEST);
        ... Some more code here
        glutSwapBuffers();
    }
    
    int main(int argc, char *argv[]) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
        glutInitWindowSize(500, 500);
        glutInitWindowPosition(300, 200);
        glutInitContextVersion(4, 2);
        glutInitContextFlags(GLUT_FORWARD_COMPATIBLE);
        glutCreateWindow("Hello World!");
        glutDisplayFunc(displayCall);
        glutMainLoop();
        return 0;
    }
    As a result I get:
    X Error of failed request: BadMatch (invalid parameter attributes)
      Major opcode of failed request: 128 (GLX)
      Minor opcode of failed request: 34 ()
      Serial number of failed request: 39
      Current serial number in output stream: 40
    Here is the stack trace:
    fghCreateNewContext() at freeglut_window.c:737 0x7ffff7bbaa81
    fgOpenWindow() at freeglut_window.c:878 0x7ffff7bbb2fb
    fgCreateWindow() at freeglut_structure.c:106 0x7ffff7bb9d86
    glutCreateWindow() at freeglut_window.c:1,183 0x7ffff7bbb4f2
    main() at AlphaTest.cpp:51 0x4007df
    Here is the last piece of code, after which the program crashes:
    createContextAttribs = (CreateContextAttribsProc) fghGetProcAddress("glXCreateContextAttribsARB");
    if (createContextAttribs == NULL) {
        fgError("glXCreateContextAttribsARB not found");
    }
    context = createContextAttribs(dpy, config, share_list, direct, attributes);
    The "glXCreateContextAttribsARB" address is obtained successfully, but the program crashes on its invocation. If I specify an OpenGL version lower than 4.2 in glutInitContextVersion(), the program runs without errors. Here is my glxinfo's OpenGL version:
    OpenGL version string: 4.2.0 NVIDIA 285.05.09
    I would appreciate any further ideas.

    Read the article

  • The 2012 JAX Innovation Awards

    - by Janice J. Heiss
    A new article, now up on otn/java, titled "The 2012 JAX Innovation Awards," reports on important Java developments celebrated by the Awards, which were announced in July of 2012. The Awards, given by S&S Media Group, aim to "reward those technologies, companies, organizations and individuals that make outstanding contributions to Java." The Awards fall into three categories: Most Innovative Java Technology, Most Innovative Java Company, and Top Java Ambassador. In addition, a finalist who did not win an award receives a Special Jury prize, "in acknowledgement of their unique contribution and positive impact on the Java ecosystem."
    The winners were: JetBrains for Most Innovative Java Company; Adam Bien as Top Java Ambassador; Restructure 101, created by Headway Software, as Most Innovative Technology; and Charles Nutter, Special Jury award. Each winner received a $2,500 prize, and the five finalists in each category were invited to attend the JAX Conference in San Francisco, California.
    JetBrains Fellow, Ann Oreshnikova, listed her favorite JetBrains innovations:
    - Nullability annotations and nullability checker
    - CamelCase navigation and completion
    - Continuous Integration in grid (on multiple agents), in TeamCity
    - IntelliJ Platform and its language support framework
    - MPS language workbench
    - Kotlin programming language
    When asked what currently excites him about Java, Adam Bien, winner of the Java Ambassador Award, expressed enthusiasm over the increasing interest of smaller companies and startups in Java EE. "This is a very good sign," he said. "Only a few years ago J2EE was mostly used by larger companies -- now it becomes interesting even for one-person shows. Enterprise Java events are also extremely popular. On the Java SE side, I'm really excited about Project Nashorn."
    Special Jury Prize winner Charles Nutter of Red Hat remarked that, "JRuby seems to have hit a tipping point this past year, moving from 'just another Ruby implementation' to 'the best Ruby implementation for X,' where X may be performance, scaling, big data, stability, reliability, security, and a number of other features important for today's applications."
    Check out the complete article here.

    Read the article

  • How do I mount a CIFS share via FSTAB and give full RW to Guest

    - by Kendor
    I want to create a Public folder that has full RW access. The problem with my configuration is that Windows users have no issues as guests (they can read, write and delete), but my Ubuntu client can't do the same: we can only read and write, not create or delete. Here is the smb.conf from my server:
    [global]
    workgroup = WORKGROUP
    netbios name = FILESERVER
    server string = TurnKey FileServer
    os level = 20
    security = user
    map to guest = Bad Password
    passdb backend = tdbsam
    null passwords = yes
    admin users = root
    encrypt passwords = true
    obey pam restrictions = yes
    pam password change = yes
    unix password sync = yes
    passwd program = /usr/bin/passwd %u
    passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
    add user script = /usr/sbin/useradd -m '%u' -g users -G users
    delete user script = /usr/sbin/userdel -r '%u'
    add group script = /usr/sbin/groupadd '%g'
    delete group script = /usr/sbin/groupdel '%g'
    add user to group script = /usr/sbin/usermod -G '%g' '%u'
    guest account = nobody
    syslog = 0
    log file = /var/log/samba/samba.log
    max log size = 1000
    wins support = yes
    dns proxy = no
    socket options = TCP_NODELAY
    panic action = /usr/share/samba/panic-action %d
    
    [homes]
    comment = Home Directory
    browseable = no
    read only = no
    valid users = %S
    
    [storage]
    create mask = 0777
    directory mask = 0777
    browseable = yes
    comment = Public Share
    writeable = yes
    public = yes
    path = /srv/storage
    The following fstab entry doesn't yield full R/W access to the share:
    //192.168.0.5/storage /media/myname/TK-Public/ cifs rw 0 0
    This doesn't work either:
    //192.168.0.5/storage /media/myname/TK-Public/ cifs rw,guest,iocharset=utf8,file_mode=0777,dir_mode=0777,noperm 0 0
    Using the following location in Nemo/Nautilus without the share being mounted does work:
    smb://192.168.0.5/storage/
    Extra info: I just noticed that if I copy a file to the share after mounting, my Ubuntu client immediately makes "nobody" the owner, and the group "no group" has read and write, with everyone else read-only. What am I doing wrong?
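
    For what it's worth, a hedged fstab sketch that additionally maps the guest-mounted files to the local desktop user, so the client-side permission check sees that user as the owner. The uid/gid value of 1000 is an assumption for the first desktop account (check with the id command), and the share and mount point match the ones above:

        //192.168.0.5/storage  /media/myname/TK-Public  cifs  guest,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,iocharset=utf8  0  0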

    Read the article

  • Additional new material WebLogic Community 2013

    - by JuergenKress
    Load Balancing T3 Initial Context Retrieval for WebLogic using Oracle Traffic Director
    Demystifying WebLogic and Fusion Middleware Management
    WebLogic Server - Integrated & Optimized w/ Best of Breed Oracle Offerings to Turbo Charge your Applications
    Get a Bird's-Eye View of IT Architecture: IT Strategies from Oracle
    IT Strategies from Oracle, a complimentary authorized library of guidelines and reference architectures, can help you put together a strong IT architecture that takes into account individual technology components as well as big-picture IT concepts and strategies. Read More.
    Deploying Oracle Application Development Framework Applications on Oracle Java Cloud Service and Oracle Database Cloud Service
    With the new Oracle Cloud environment you no longer have to maintain an Oracle WebLogic server or a database server of your own - you can instead use instances hosted on Oracle Cloud. More
    Oracle Application Development Framework Development with Eclipse
    Oracle Enterprise Pack for Eclipse now provides even more Oracle Application Development Framework tooling with each release. Check out this new tutorial on Oracle Enterprise Pack for Eclipse 12.1.1.2.
    Oracle WebLogic Devcast Series
    Join us for the March 28 Oracle WebLogic Devcast Webcast, "What to Expect from Maven on Oracle WebLogic," featuring Pyounguk Cho, Oracle's principal product manager. Learn what developers can expect when utilizing Apache Maven with Oracle WebLogic.
    Customer Webcasts: WebLogic Devcast Series - Register
    - Leveraging Third-Party Libraries to Create and Deploy Applications to Oracle Cloud
    - Oracle ADF: Tuning Application Module Pools and Connection Pools
    WebLogic Partner Community
    For regular information become a member in the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Blog Twitter LinkedIn Mix Forum Wiki
    Technorati Tags: WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Are long methods always bad?

    - by wobbily_col
    So, looking around earlier, I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example, I have some Django views that do a bit of processing of the objects before sending them to the view, a long method being 350 lines of code. I have my code written so that it deals with the parameters - sorting / filtering the queryset, then bit by bit does some processing on the objects my query has returned. So the processing is mainly conditional aggregation that has complex enough rules that it can't easily be done in the database, so I have some variables declared outside the main loop which get altered during the loop:
    variable_1 = 0
    variable_2 = 0
    for object in queryset:
        if object.condition_a and variable_2 > 0:
            variable_1 += 1
        # ... more conditions to alter the variables ...
    return queryset, and context
    So according to the theory I should factor out all the code into smaller methods, so that the view method is at most one page long. However, having worked on various code bases in the past, I sometimes find this makes the code less readable, when you need to constantly jump from one method to the next figuring out all the parts of it, while keeping the outermost method in your head. I find that with a long method that is well formatted, you can see the logic more easily, as it isn't hidden away in inner methods. I could factor out the code into smaller methods, but often there is an inner loop being used for two or three things, so it would result in more complex code, or methods that don't do one thing but two or three (alternatively I could repeat inner loops for each task, but then there would be a performance hit). So is there a case that long methods are not always bad? Is there always a case for writing methods when they will only be used in one place?
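
    As a concrete point of comparison, here is a hedged Python sketch of the "extract a helper" refactoring being debated: the loop and its conditional aggregation move into a single well-named function that returns the accumulated values, so the view stays short without scattering the logic across many tiny methods. The names (aggregate_stats, build_queryset, the template) are illustrative assumptions, not taken from the original view:

        from django.shortcuts import render

        def aggregate_stats(queryset):
            """Single pass over the queryset; all the conditional counters live here."""
            variable_1 = 0
            variable_2 = 0
            for obj in queryset:
                if obj.condition_a and variable_2 > 0:
                    variable_1 += 1
                # ... more conditions that alter the counters ...
            return variable_1, variable_2

        def my_view(request):
            queryset = build_queryset(request)   # assumed helper: sorting/filtering from the request parameters
            variable_1, variable_2 = aggregate_stats(queryset)
            context = {"variable_1": variable_1, "variable_2": variable_2}
            return render(request, "report.html", context)

    The single pass over the queryset is preserved, so there is no performance penalty; the trade-off is purely about whether the named helper reads better than the inline loop.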

    Read the article

  • Lead Programmer definition clarification

    - by Junaid
    I have been working on PHP and MySQL based web applications for more than 5 years now. I started my career as an Intern, then Jr Developer, Software Developer, and Sr. Software Engineer [Team Lead], which is what I am nowadays. I was looking at the Wikipedia entry on what a lead programmer is. It states the following:
    A lead programmer is a software engineer in charge of one or more software projects. Alternative titles include Development Lead, Technical Lead, Senior Software Engineer, Software Design Engineer Lead (SDE Lead), Software Manager, or Senior Applications Developer. When primarily contributing in a high-level enterprise software design role, the title Software Architect (or similar) is often used. All of these titles can have different meanings depending on the context.
    My current job responsibilities are more or less those of a Development Lead, and to some extent close to a Software Architect, because I usually design the core structure of new products, manage 2-3 projects simultaneously, and in the meantime assist other teams with the structural design of their projects. I am usually on calls with clients along with the project managers, and I code most of the time when my team is stuck somewhere, under heavy workload, integrating some third-party API, etc.
    The primary reason for writing this is to ask: do I qualify for a Development Lead title, in accordance with the job responsibilities described above?

    Read the article

  • Embedded Tomcat Cluster

    - by ThreaT
    Can someone please explain with an example how an embedded Tomcat cluster works. Would a load balancer be necessary? Since we're using embedded Tomcat, how would two separate jar files (each a standalone web application with its own embedded Tomcat instance) know where each other are and let each other know their status, etc.? Here is the code I have so far, which is just a regular embedded Tomcat without any clustering:
    import org.apache.catalina.Context;
    import org.apache.catalina.LifecycleException;
    import org.apache.catalina.startup.Tomcat;
    
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import java.io.File;
    import java.io.IOException;
    import java.io.Writer;
    
    public class Main {
        public static void main(String[] args) throws LifecycleException, InterruptedException, ServletException {
            Tomcat tomcat = new Tomcat();
            tomcat.setPort(8080);
            Context ctx = tomcat.addContext("/", new File(".").getAbsolutePath());
    
            Tomcat.addServlet(ctx, "hello", new HttpServlet() {
                protected void service(HttpServletRequest req, HttpServletResponse resp)
                        throws ServletException, IOException {
                    Writer w = resp.getWriter();
                    w.write("Hello, World!");
                    w.flush();
                }
            });
    
            ctx.addServletMapping("/*", "hello");
            tomcat.start();
            tomcat.getServer().await();
        }
    }
    Source: java dzone
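
    To the clustering part of the question: an embedded Tomcat can join a cluster the same way a standalone one does, by attaching a SimpleTcpCluster to the engine. With the default multicast membership each JVM announces itself and discovers its peers on the same network segment, so the two jars do not need to be told about each other explicitly; a front-end HTTP load balancer (for example Apache httpd with mod_jk/mod_proxy, nginx, or HAProxy) is still what actually spreads client requests across the instances. A hedged sketch of the extra wiring, added inside main() after the context is created; session replication additionally requires the context to be marked distributable and given a cluster-aware manager:

        import org.apache.catalina.ha.session.DeltaManager;
        import org.apache.catalina.ha.tcp.SimpleTcpCluster;

        // ... inside main(), after tomcat and ctx have been created ...
        SimpleTcpCluster cluster = new SimpleTcpCluster();   // defaults to multicast membership
        tomcat.getEngine().setCluster(cluster);

        // Replicate sessions for this context between the cluster members.
        ctx.setDistributable(true);
        ctx.setManager(new DeltaManager());

    The defaults are only a starting point; membership addresses, ports and valves can be configured on the cluster object just as they would be in server.xml.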

    Read the article
