Search Results

Search found 6030 results on 242 pages for 'exists'.


  • I know how to program, and how to learn how to program, but how/where do you learn how to make systems properly?

    - by Ryan
    There are many things that need to be considered when building a system. Take, for example, a web-based system where users log in and interact with each other, creating and editing content. Now I have to think about security, validation (I'm not even 100% sure what that entails), making sure users don't step on each other's feet (is there a term for this?), preventing errors in many cases, and making sure database data doesn't become problematic through unexpected situations. I don't know how or where to learn any of these things. Is there a book on this kind of stuff? Like I said, there seems to be a huge difference between writing code and actually writing the right code, if you know what I mean. I feel like my current programming work lacks much of what I have described, and I can see the problems it causes later, when they are much harder to solve because data already exists and people are using it. So can anyone point me to books, resources, or the proper subset of programming for this type of learning? PS: feel free to correct my tags; I don't know what I am talking about. Edit: I assume some of the examples I wrote apply to other types of systems too; I just don't know any other good examples because I've been mostly involved in web work.

    Read the article

  • Website attack triggering form-submission emails: related questions

    - by IberoMedia
    We are experiencing website attacks that trigger the submission of a form and send alert emails. The normal process of form submission is to fill in a couple of text fields; when the user is redirected, the next page processes $_POST, and if $_POST exists, the email to the intended form recipients is triggered. What is happening right now is that we are receiving the form-submission email three emails at a time with the same information. The information within each batch is identical, but not all of the spam emails contain the same information; each batch of triggered emails has unique information. The form has no captcha, and if possible we would like to keep it this way. The website has worked fine and had no spamming problems until today. We have monitoring software for the website, but whoever is submitting this form over and over is not being recorded by the tracking software. Why is this? Is the person actually visiting the website? The only suspicious visit tracked was on November 10th, and this record also shows three forms submitted (this is how I identified a possible first visit by the attacker). Then there were no incidents until today. What is the goal of the spam attack? Is the attacker expecting us to respond to the bogus emails? What can they achieve with repeated submissions of the form? Why are three emails triggered in a row? Is this indicative that they may be using a script? This is a PHP website. Is there a way for a client to view the PHP code of a page? Thank you.

    Read the article

  • Effortlessly resize images in Orchard 1.7

    - by Bertrand Le Roy
    I’ve written several times about image resizing in .NET, but never in the context of Orchard. With the imminent release of Orchard 1.7, it’s time to correct this. The new version comes with an extensible media pipeline that enables you to define complex image processing workflows that can automatically resize, change formats or apply watermarks. This is not the subject of this post, however. What I want to show here is one of the underlying APIs that enable that feature, and that comes in the form of a new shape. Once you have enabled the media processing feature, a new ResizeMediaUrl shape becomes available from your views. All you have to do is feed it a virtual path and size (and, if you need to override defaults, a few other optional parameters), and it will do all the work for you of creating a unique URL for the resized image and writing that image to disk the first time the shape is rendered: <img src="@Display.ResizeMediaUrl(Path: img, Width: 59)"/> Notice how I only specified a maximum width. The height could of course be specified, but in this case will be automatically determined so that the aspect ratio is preserved. The second time the shape is rendered, it will notice that the resized file already exists on disk and will serve that directly, so caching is handled automatically and the image can be served almost as fast as the original static one, because it is also a static image. Only the URL generation and the check for the file's existence take time. Here is what the generated thumbnails look like on disk: In the case of those product images, the product page will download 12kB worth of images instead of 1.87MB. The full-size images will only be downloaded as needed, if the user clicks on one of the thumbnails to get the full-scale version. This is an extremely useful tool to use in your themes to easily render images of the exact right size and thus limit your bandwidth consumption. Mobile users will thank you for that.
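
    The caching behavior described above comes down to a simple check-then-generate pattern. Here is a minimal C# sketch of that idea, purely as an illustration of the described behavior and not Orchard's actual implementation; all names are hypothetical:

        using System;
        using System.IO;

        public static class ResizedImageCache
        {
            // Returns the path of the resized image, doing the expensive work only once.
            public static string GetOrCreate(string cachePath, Func<byte[]> resize)
            {
                if (!File.Exists(cachePath))                  // later renders only pay for this check
                {
                    File.WriteAllBytes(cachePath, resize());  // first render: resize and persist to disk
                }
                return cachePath;                             // the file is then served as static content
            }
        }

    Once the file exists, serving it is as cheap as serving any other static image, which is exactly the point made above about URL generation and the existence check being the only remaining cost.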

    Read the article

  • How much knowledge do I need to begin a project in Django

    - by Smock
    I started learning Django about a month ago. I have intermediate C and Java programming experience. I read the first 8 chapters of the Django Book. Afterwards, I picked up Practical Django Projects by James Bennett and did the first two projects: a CMS and a web blog. However, I started getting lost when he got to the generic views part. I know that's important, but I'm not sure how important it is when trying to implement a project. Anyway, I have a project in mind that I'd like to start; however, I'm nervous as to where to begin. I'm overwhelmed by the number of things that I'd like my project to do, but I have little or no knowledge of how to do them, e.g. how do I use CSS and JavaScript in my project? Moreover, I am aware that some Django packages exist to ease development, but I don't know whether I should use them or not. I apologize for my lengthy message; I just want some advice and encouragement. I have a project in mind, but do you think I need to read more materials and tutorials, or is it smart to just start working on my project based on the minimal knowledge I've gained from those books? Any information that can be provided is much appreciated. I really want to get good at this, but I just need some direction.

    Read the article

  • Directory error when trying to create a new user

    - by Tom Brossman
    I added a second user 'shirley' in Settings - User Accounts, and set a password. The account type is Standard. In 11.04 this worked, and I logged in and had a functioning desktop for this user. How is this done in 11.10? When I try to log in as this user I get this error: Nautilus could not create the required folder "/home/shirley/.config/nautilus". Before running Nautilus, please create the following folder, or set permissions such that Nautilus can create it. The only option then is to click OK, which dumps me out to full-screen Nautilus, like this: There is no launcher or visible way to start any programs. Print Screen doesn't take a screengrab. The desktop is similar to this question, but I get no terminal when I press CTRL+ALT+T. I have to press CTRL+ALT+F2 and restart from the terminal to get out of this. This answer is to install gnome-system-tools. Shouldn't I be able to add a second user with the default install? EDIT: I tried the deluser+adduser suggestion; there was no change after trying it. Here is what I got:
    tom@desktop:~$ sudo deluser shirley
    [sudo] password for tom:
    Removing user `shirley' ...
    Warning: group `shirley' has no more members.
    Done
    tom@desktop:~$ sudo adduser shirley
    Adding user `shirley' ...
    Adding new group `shirley' (1001)
    Adding new user `shirley' (1001) with group `shirley'
    The home directory '/home/shirley' already exists. Not copying from `/etc/skel'.
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Changing user information for shirley
    Enter the new value, or press ENTER for the default
        Full Name []:
        Room Number []:
        Work Phone []:
        Home Phone []:
        Other []:
    Is the information correct? [Y/n] y
    What has gone so wrong with simply adding a second user? Am I the only one having this problem? I'd reinstall if that fixed things, but this is a fresh install only a few days old.

    Read the article

  • JSON Support in Azure

    - by kaleidoscope
    Here is how we can use JavaScript Object Notation (JSON) in cloud applications. As we all know, client script is useful in web applications in terms of performance. In the same way, we can use jQuery in an ASP.NET application running in the cloud to asynchronously pull any messages out of the table (cloud storage) and display them in the browser, by invoking a method on a controller that returns JSON in a well-known shape. Syntax: suppose we want an action that returns notifications to the client while the end user interacts with our application; we can use the following:
        public JsonResult GetMessages()
        {
            if (User.Identity.IsAuthenticated)
            {
                UserTextNotification[] userToasts =
                    toastRepository.GetNotifications(User.Identity.Name);
                object[] data =
                    (from UserTextNotification toast in userToasts
                     select new { title = toast.Title ?? "Notification",
                                  text = toast.MessageText }).ToArray();
                return Json(data, JsonRequestBehavior.AllowGet);
            }
            else
                return Json(null);
        }
    The function above checks authentication and returns the user's notifications as JSON, or null if the user is not authenticated. Platform: ASP.NET 3.5, MVC 1, under Visual Studio 2008. Please see the following link for more detail: http://msdn.microsoft.com/en-us/magazine/ee335721.aspx Chandraprakash, S

    Read the article

  • Windows Azure Horror Story: The Phantom App that Ate my Data

    - by jasont
    Originally posted on: http://geekswithblogs.net/jasont/archive/2013/10/31/windows-azure-horror-story-the-phantom-app-that-ate-my.aspx I’ve posted in the past about how my company, Stackify, makes some great tools that enhance the Windows Azure experience. By just using the Azure Management Portal, you don’t get a lot of insight into what is happening inside your instances, which are just Windows VMs. This morning, I noticed something quite scary, and fittingly enough, it’s Halloween. I deleted a deployment after doing a new deployment and VIP swap. But, after a few minutes, the instance still appeared in my Stackify dashboard. I thought this odd, as we monitor these operations and remove devices that have been deleted. Digging in showed that it wasn’t a problem with Stackify. This instance was still there, and still sending data. You’ll notice though that the “Status” check shows that the instance no longer exists. And it gets scarier. I quickly checked the running services, and sure enough, the Windows Azure Guest Agent (which runs worker roles) was still running: “Surely, this can’t be” I’m thinking at this point. Knowing that this worker role is in our QA environment, it has debug logging turned on. And, using our log file tailing feature, I can see that, yes, indeed, this deleted deployment is still executing code. Pulling messages off queues. Sending alerts. Changing data. Fortunately, our tools give me the ability to manually disable the Windows service that runs this code. In the meantime, I have contacted everyone I know (and support) at Microsoft to address this issue. Needless to say, this is concerning. When you delete a deployment, it needs to actually delete. Not continue to eat your data. As soon as I learn more, I’ll be posting an update.

    Read the article

  • Discovering Your Project

    - by Tim Murphy
    The discovery phase of any project is both exciting and critical to the project’s success. There are several key points that you need to keep in mind as you navigate this process. The first thing you need to understand is who the players in the project are and what their motivations are for the project. Leaving out a key stakeholder in the resulting product is one of the easiest ways to doom your project to failure. The better the quality of the input you have at this early phase, the better chance you will have of creating a well-accepted deliverable. The next task you should tackle is to gather the goals for the project. Specifically, what does the company expect to get for the money it is about to lay out? This seems like a common-sense task, but you would be surprised how many teams go straight to building the system. Even if you are following an agile methodology, I believe that this is critical. Inventorying the resources that already exist gives you an idea of what you are going to have to build and what you can leverage at lower risk. This list should include documentation, servers, code repositories, databases, languages, security systems and supporting teams. All of these are “resources” that can affect the cost and delivery schedule of your project. Finally, you need to verify what you have found and documented with the stakeholders and subject matter experts. Documentation that has not been reviewed is actually a list of assumptions, and we all know that assumptions are the mother of all screw-ups. If you give the discovery phase of your project the attention that it deserves, your project has a much better chance of success. I would love to hear what other people find important for this phase. Please leave comments on this post so we can share the knowledge. del.icio.us Tags: Project discovery, documentation, business analysis, architecture

    Read the article

  • Decorator not calling the decorated instance - alternative design needed

    - by Daniel Hilgarth
    Assume I have a simple interface for translating text (sample code in C#): public interface ITranslationService { string GetTranslation(string key, CultureInfo targetLanguage); // some other methods... } A first simple implementation of this interface already exists and simply goes to the database for every method call. Assuming a UI that is being translated at start-up, this results in one database call per control. To improve this, I want to add the following behavior: as soon as a request for one language comes in, fetch all translations for this language and cache them; all translation requests are then served from the cache. I thought about implementing this new behavior as a decorator, because all other methods of that interface implemented by the decorator would simply delegate to the decorated instance. However, the implementation of GetTranslation wouldn't use GetTranslation of the decorated instance at all to get all translations for a certain language; it would fire its own query against the database. This breaks the decorator pattern, because all the functionality provided by the decorated instance is simply skipped. This becomes a real problem if there are other decorators involved. My understanding is that a decorator should be additive. In this case, however, the decorator is replacing the behavior of the decorated instance. I can't really think of a nice solution for this - how would you solve it? Everything is allowed, even a complete re-design of ITranslationService itself.
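
    One possible direction, shown as a minimal C# sketch: instead of decorating ITranslationService, let the caching class implement ITranslationService directly and depend on a narrower bulk-fetch abstraction. The ITranslationRepository interface, its GetAllTranslations method and the fallback behavior below are assumptions introduced purely for illustration; they are not part of the question's design.

        using System.Collections.Generic;
        using System.Globalization;

        // Hypothetical bulk-fetch abstraction; not part of the original ITranslationService.
        public interface ITranslationRepository
        {
            IDictionary<string, string> GetAllTranslations(CultureInfo language);
        }

        // Implements ITranslationService (the interface from the question above) directly,
        // so no decorated behavior is silently skipped.
        public class CachingTranslationService : ITranslationService
        {
            private readonly ITranslationRepository repository;
            private readonly Dictionary<CultureInfo, IDictionary<string, string>> cache =
                new Dictionary<CultureInfo, IDictionary<string, string>>();

            public CachingTranslationService(ITranslationRepository repository)
            {
                this.repository = repository;
            }

            public string GetTranslation(string key, CultureInfo targetLanguage)
            {
                IDictionary<string, string> translations;
                if (!cache.TryGetValue(targetLanguage, out translations))
                {
                    // First request for a language: fetch everything once and cache it.
                    translations = repository.GetAllTranslations(targetLanguage);
                    cache[targetLanguage] = translations;
                }

                string text;
                // Assumed fallback: return the key itself when no translation is found.
                return translations.TryGetValue(key, out text) ? text : key;
            }

            // The interface's other methods would be implemented here as well.
        }

    This sidesteps the decorator question: the class is a plain implementation of ITranslationService built on a different, bulk-oriented dependency, rather than a decorator that bypasses the instance it wraps.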

    Read the article

  • Dealing with three Windows partitions in dual boot installation

    - by Tim
    For dual-boot installation of Ubuntu after Windows. Quoted from ubuntuguide If a Windows boot partition exists as a second NTFS partition, it should be left alone. If there is a Windows recovery partition also installed, it can also be left alone as long as there are only two NTFS partitions total on the hard drive (i.e. there is no NTFS boot partition as well). If there are a total of 3 NTFS partitions on the hard drive, then the third Windows NTFS partition (the recovery partition) should be removed after creating Recovery CDs from it (see here). In the last case where Windows has three partitions, I was wondering why it says the recovery partition shall be removed? Is it possible to keep the three and create another extended partition with several logical partitions for installing Ubuntu and dual-booting the two OSes? I plan to dual-boot install Ubuntu 10.04 with existing Windows 7. Following is the layout of the current partitions of my hard drive viewed from Windows 7: So must I remove the Lenovo_Recovery (Q:) partition for the same reason you give for the first question? Thanks and regards!

    Read the article

  • File doesn't exist when trying to change permissions following the avasys image scan manual

    - by Howard Graham
    I was finally able to connect to avasys.jp and downloaded and installed iscan_2.28.1-3.ltdl7_amd64.deb iscan-data_1.13.0-1_all.deb. The programs appeared to install correctly. I then ran sane-find-scanner and got back: found USB scanner (vendor=0x04b8, product=0x012d) at libusb:001:003 I then ran lsusb and got back: Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 003: ID 04b8:012d Seiko Epson Corp. Perfection V10/V100 (GT-S600/F650) Bus 001 Device 004: ID 03f0:4817 Hewlett-Packard Bus 002 Device 002: ID 093a:2510 Pixart Imaging, Inc. Optical Mouse the avasys image scan manual instructed me to run chmod 0666 /proc/bus/usb/001/003 which returned chmod: cannot access `/proc/bus/usb/001/003': No such file or directory In 12.04, no such directory exists. 12.04 appears to deal with USB in another way. What must I do to get the usb port 001/003 recognized by xsane and sane as the port where the scanner can be located? What must I do to continue installing the scanner?

    Read the article

  • How do I stop sound coming from my speakers with a faulty headphone socket?

    - by Andy
    On my laptop, I have a faulty headphone socket, so when I insert headphones into it, the speakers do not mute. I can confirm that this problem is caused by faulty hardware and not software as when I twist the headphone jack, the speakers come on and off according to the movements. On previous versions of Ubuntu, I worked around this problem by going into alsamixer and disabling "Auto-Mute Mode", and then going into the sound settings and choosing "Analog Headphones". However, on 12.04, no such option exists, rendering my headphones unusable with no way to work around the problem. I momentarily thought I had this problem fixed when I installed PulseAudio Volume Control from the Software Centre. I selected the Output Devices tab, and under "Built-in Audio Analogue Stereo" I selected "Headphones" for the port. However, this almost randomly seems to change back to "Speakers", despite me setting "Auto-Mute Mode" as disabled. Basically, I would like a way to permanently mute the speakers so only the headphones will play sound, without it losing my settings. It is ridiculous that such a simple setting has been taken away to "simplify" the user interface.

    Read the article

  • Algorithm to measure how "diffused" 5,000 pennies are in an economy?

    - by makerofthings7
    Please allow me to use this example/metaphor to describe an algorithm I need.
    Objects: There are 5 thousand pennies. There are 50 cups. There is a tracking history (passport "stamps", etc.) associated with each penny as it moves between cups.
    Definition: I'll define a "highly diffused" penny as one that passes through many cups. A "poorly diffused" penny is one that just passes back and forth between 2 cups.
    Question: How can I objectively measure the diffusion of a penny in terms of: the number of moves the penny has gone through; the number of cups the penny has been in; and a unit of time (day, week, month)?
    Why am I doing this? I want to detect if a cup is hoarding pennies.
    Resistance from bad actors: Since hoarding is bad, the "bad cup" may simply solicit a partner and move pennies back and forth with it. This will reduce the amount of time a coin isn't in transit and would skew hoarding detection. A solution might be to detect whether a cup (or set of cups) are common "partners" with each other, though I'm not sure how to think through this problem.
    Broad applicability: Any assistance would be helpful, since I would think that this algorithm is common to economics, the study of migration patterns (of animals, or of citizens of a country), and other naturally occurring phenomena... and probably exists as a term or concept I'm unfamiliar with.
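
    As a rough illustration of the measurements asked about above (moves made and distinct cups visited within a time window), here is a minimal C# sketch. The Move record and all names in it are hypothetical, introduced only for illustration; the question does not describe a concrete data model.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical record of one penny arriving in a cup at a given time.
        public record Move(int PennyId, int CupId, DateTime Timestamp);

        public static class Diffusion
        {
            // Diffusion measures for one penny within a time window:
            // how many moves it made and how many distinct cups it visited.
            public static (int Moves, int DistinctCups) Measure(
                IEnumerable<Move> history, int pennyId, DateTime from, DateTime to)
            {
                var moves = history
                    .Where(m => m.PennyId == pennyId && m.Timestamp >= from && m.Timestamp < to)
                    .ToList();

                return (moves.Count, moves.Select(m => m.CupId).Distinct().Count());
            }
        }

    A penny with many moves but very few distinct cups over a week (say 40 moves across 2 cups) matches the "poorly diffused" case described above, and flagging pairs of cups that account for most of each other's traffic is one conceivable starting point for the "common partners" detection.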

    Read the article

  • Can Mailchimp APIs be used to send templated transaction email triggered by actions on my website?

    - by HenryW
    I am currently playing around with Mailchimp's APIs, but the documentation is not very clear to me. Here is what I actually want: (1) have the templates I created on Mailchimp be visible on my own server; (2) assign each template I made to a specific action (logged in, subscribed, created order, or new password). This is functionality that I have already tested with Mandrill, but there the template exists in my Mandrill account. If option 1 is not possible, can I still make my own template in my own environment and send that template out over Mailchimp or Mandrill? Should I use Mailchimp's services for this, or send the email directly from my own server? The function currently used:
    function tep_mandrill_mail($to_name, $to_email_address, $email_subject, $email_text, $from_email_name, $from_email_address) {
        if (SEND_EMAILS != 'true') return false;
        $uri = 'https://mandrillapp.com/api/1.0/messages/send-template.json';
        $postString = '{
            "key": "xxxxxxxxxxx",
            "template_name": "sometemplatename",
            "template_content": [
                { "name": "header", "content": "*|HEADERSTUFF|*" },
                { "name": "main", "content": "*|CONTENTSTUFF|*" },
                { "name": "footer", "content": "*|FOOTERSTUFF|*" }
            ],
            "message": {
                "subject": "'.$email_subject.'",
                "from_email": "'.$from_email_address.'",
                "from_name": "'.$from_email_name.'",
                "to": [ { "email": "'.$to_email_address.'", "name": "'.$to_name.'" } ],
                "important": false,
                "track_opens": true,
                "merge": true,
                "merge_vars": [
                    { "rcpt": "'.$to_email_address.'",
                      "vars": [
                          { "name": "HEADERSTUFF", "content": "'.$email_subject.'" },
                          { "name": "CONTENTSTUFF", "content": "'.$email_text.'" },
                          { "name": "FOOTERSTUFF", "content": "paulvale-foot" }
                      ] }
                ],
                "tags": [ "password_forgotten" ]
            },
            "async": false,
            "ip_pool": "Main Pool"
        }';
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $uri);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $postString);
        curl_exec($ch);
    }

    Read the article

  • Are first-class functions a substitute for the Strategy pattern?

    - by Prog
    The Strategy design pattern is often regarded as a substitute for first-class functions in languages that lack them. So, for example, say you wanted to pass functionality into an object. In Java you'd have to pass the object another object which encapsulates the desired behavior. In a language such as Ruby, you'd just pass the functionality itself in the form of an anonymous function. However, I was thinking about it and decided that maybe Strategy offers more than a plain anonymous function does. This is because an object can hold state that exists independently of the period when its method runs, whereas an anonymous function by itself can only hold state that ceases to exist the moment the function finishes execution. So my question is: when using a language that features first-class functions, would you ever use the Strategy pattern (i.e. encapsulate the functionality you want to pass around in an explicit object), or would you always use an anonymous function? When would you decide to use Strategy when you can use a first-class function?
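
    To make the comparison concrete, here is a minimal C# sketch showing the same behavior passed both as a strategy object and as a first-class function; the discount example and all names are hypothetical, chosen only for illustration:

        using System;

        // Strategy as an explicit object; it can carry state across calls.
        public interface IDiscountStrategy
        {
            decimal Apply(decimal price);
        }

        public class CountingDiscount : IDiscountStrategy
        {
            public int TimesUsed { get; private set; }   // state that outlives any single call

            public decimal Apply(decimal price)
            {
                TimesUsed++;
                return price * 0.9m;
            }
        }

        public class Checkout
        {
            // The same functionality accepted as a strategy object...
            public decimal Total(decimal price, IDiscountStrategy discount) => discount.Apply(price);

            // ...or as a first-class function.
            public decimal Total(decimal price, Func<decimal, decimal> discount) => discount(price);
        }

        public static class Demo
        {
            public static void Main()
            {
                var checkout = new Checkout();

                var strategy = new CountingDiscount();
                Console.WriteLine(checkout.Total(100m, strategy));   // 90.0, strategy.TimesUsed == 1

                int timesUsed = 0;
                Func<decimal, decimal> discount = p => { timesUsed++; return p * 0.9m; };
                Console.WriteLine(checkout.Total(100m, discount));   // 90.0, timesUsed == 1
            }
        }

    Note that a C# lambda that closes over a local variable (timesUsed above) does keep that state alive between calls, so in practice the choice often comes down to readability, testability and whether the behavior needs several related methods, rather than to state alone.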

    Read the article

  • Does the Google crawler really guess URL patterns and index pages that were never linked to?

    - by Dominik
    I'm experiencing problems with indexed pages which were (probably) never linked to. Here's the setup:
    1. Data server: an application with a RESTful interface which provides the data.
    2. Website A: provides the data of (1) at http://website-a.example.com/?id=RESOURCE_ID
    3. Website B: provides the data of (1) at http://website-b.example.com/?id=OTHER_RESOURCE_ID
    So all the non-private data is stored on (1), and the websites (2) and (3) can fetch and display this data, as a representation of the data with additional cross-linking between them. In fact, the URL /?id=1 of website-a points to the same resource as /?id=1 of website-b. However, the resource id:1 is useless at website-b. Unfortunately, the Google index for website-b now contains several links to resources belonging to website-a and vice versa. I "heard" that the Google crawler tries to determine the URL pattern (which makes sense for deciding which pages should go into the index and which should not) and, furthermore, guesses other URLs by trying different values (like "I know that id 1 exists, let's try 2, 3, 4, ..."). Is there any evidence that the Google crawler really behaves that way (which I doubt)? My guess is that the Google crawler submitted an HTML form and somehow got links to those unwanted resources. I found some similar questions posted about this, including "Google webmaster central: indexing and posting false pages" [link removed]; however, none of them gives any evidence.

    Read the article

  • Cannot play windows WMA lossless files on Rhythmbox

    - by sr71
    I have installed Ubuntu 10.10 with all its updates (without Windows) on its own drive and everything is working fine. I want to play WMA audio files as well as MP3 files. The MP3 files play fine; the WMA files do not. I have used "Rhythmbox Music Player" with and without ubuntu-restricted-extras, and it still does not play the lossless Windows audio files. I am frustrated with searching for ways to play a WMA file ("download this converter", but you cannot use it until you "delete this"). I have done everything, but it still does not play my Windows lossless files that I made from all my CDs. I am looking for a music player that I can use to play MP3 and WMA lossless music files, and that will automatically add the album cover and update the info if it exists. Installation should be as simple as possible. Right now I am back to the original virgin Ubuntu 10.10 with all the recent updates. This computer will do nothing but play music (MP3 and WMA) through a stereo system. I also use the Internet to update album info for the music. I do not care what bells and whistles the music player program has, as long as it is an easy install and just plays my MP3 and WMA lossless music files. Any help would be appreciated.

    Read the article

  • Handling Types for Real and Complex Matrices in a BLAS Wrapper

    - by mga
    I come from a C background and I'm now learning OOP with C++. As an exercise (so please don't just say "this already exists"), I want to implement a wrapper for BLAS that will let the user write matrix algebra in an intuitive way (e.g. similar to MATLAB): A = B*C*D.Inverse() + E.Transpose(); My problem is how to go about dealing with real (R) and complex (C) matrices, because of C++'s "curse" of letting you do the same thing in N different ways. I do have a clear idea of what it should look like to the user: s/he should be able to define the two separately, but operations would return a type depending on the types of the operands (R*R = R, C*C = C, R*C = C*R = C). Additionally, R can be cast into C and vice versa (just by setting the imaginary parts to 0). I have considered the following options:
    1. As a real number is a special case of a complex number, inherit CMatrix from RMatrix. I quickly dismissed this, as the two would have to return different types for the same getter function.
    2. Inherit RMatrix and CMatrix from Matrix. However, I can't really think of any common code that would go into Matrix (because of the different return types).
    3. Templates. Declare Matrix<T>, declare the getter function as T Get(int i, int j), and the operator functions as Matrix operator*(Matrix RHS). Then specialize Matrix<double> and Matrix<complex>, and overload the functions. In the end I couldn't really see what I would gain with templates, so why not just define RMatrix and CMatrix separately from each other, and then overload functions as necessary?
    Although this last option makes sense to me, there's an annoying voice inside my head saying this is not elegant, because the two are clearly related. Perhaps I'm missing an appropriate design pattern? So I guess what I'm looking for is either absolution for doing this, or advice on how to do better.

    Read the article

  • "Viral" license that only blocks legal actions of user and developer against each other

    - by Lukasz Zaroda
    I have been thinking a lot about software licensing lately, because I would like to do some coding. I'm not an expert in all those licenses, so I came up with my own idea, and before I put it on paper I would like to make sure that I didn't reinvent the wheel, so that maybe I can use something that already exists. The main idea behind my license is to guarantee freedom of use of the software, but not "freedom to" (positive) (e.g. the freedom to have the source code), rather "freedom from" (negative) (strictly, freedom from legal actions against you). It would be a "viral" copyleft license. You would be able to do everything you want with the software (and the binaries, e.g. reverse engineering) without fear, as long as you include information about the author and/or authors, and all derivative works are distributed under the same license. I'm not interested in anything that would restrict the freedom of a company to do something like "tivoization". I'm just trying to accomplish something that would block any legal actions of user and developer, targeted against each other, with the exception of basic attribution. Does something like that exist?

    Read the article

  • How to handle new domain names?

    - by michael
    I have a new product, which I'll call a pen ink reloader. I have a website using my product's name, for example www.inkywink.com, which I want to have found by searches for keywords such as "pen ink", "pen out of ink" and "ink for pens", since nobody knows that a pen ink reloader exists. I see that it's quite difficult to get on the front page for these keywords, since they have lots of competition. However, I notice that the exact phrases I want to rank highly for are available as domains. I purchase "www.penink.com" and "penoutofink.com", which for argument's sake are highly searched and the perfect keywords to get eyes on my money site www.inkywink.com. Two questions: 1. What is my best option to leverage those names so that they appear near the top of searches and I can get traffic to my money site? Do I just have them 301 redirect to inkywink.com, or should I create small original content on each with links to my main site? 2. If I just have them redirected to inkywink.com, am I able to use keywords in the meta tags and headers for each site separately, or do they all automatically obtain the same headers and tags as the site to which they're redirected? Thanks to anyone who can help, as I'm a real newbie to all this.

    Read the article

  • Pitching My Software-Should I Get a Patent? [closed]

    - by Mike
    This Thursday I will be filming for a Canadian TV show where I will be pitching my software to 5 Canadian millionaires to invest in. The software is ridiculously basic, but it is for sports teams, specifically football and basketball teams. (The show won't air till September.) I made it in Adobe Flash using ActionScript. I have been selling the software for $20, mostly to sports teams in the USA, including university teams. I have only sold 32 copies ($640), but I don't advertise; I just write articles on the subject. Now my question is this: I have no patent on this software, since I have been doing this casually. Is it a bad idea to go onto this TV show and ask them for money for a software patent? Or should I be asking for money for marketing or other things? Note: I also have no idea how much I should ask for. I know absolutely nothing about software patents, but, if it matters, no other software exists that does this exact thing (however, it could be easily duplicated).

    Read the article

  • What is bondib1 used for on SPARC SuperCluster with InfiniBand, Solaris 11 networking & Oracle RAC?

    - by user12620111
    A co-worker asked the following question about a SPARC SuperCluster InfiniBand network: > on the database nodes the RAC nodes communicate over the cluster_interconnect. This is the > 192.168.10.0 network on bondib0. (according to ./crs/install/crsconfig_params NETWORKS> setting) > What is bondib1 used for? Is it a HA counterpart in case bondib0 dies? This is my response: Summary: bondib1 is currently only being used for outbound cluster interconnect interconnect traffic. Details: bondib0 is the cluster_interconnect $ oifcfg getif            bondeth0  10.129.184.0  global  public bondib0  192.168.10.0  global  cluster_interconnect ipmpapp0  192.168.30.0  global  public bondib0 and bondib1 are on 192.168.10.1 and 192.168.10.2 respectively. # ipadm show-addr | grep bondi bondib0/v4static  static   ok           192.168.10.1/24 bondib1/v4static  static   ok           192.168.10.2/24 Hostnames tied to the IPs are node1-priv1 and node1-priv2  # grep 192.168.10 /etc/hosts 192.168.10.1    node1-priv1.us.oracle.com   node1-priv1 192.168.10.2    node1-priv2.us.oracle.com   node1-priv2 For the 4 node RAC interconnect: Each node has 2 private IP address on the 192.168.10.0 network. Each IP address has an active InfiniBand link and a failover InfiniBand link. Thus, the 4 node RAC interconnect is using a total of 8 IP addresses and 16 InfiniBand links. bondib1 isn't being used for the Virtual IP (VIP): $ srvctl config vip -n node1 VIP exists: /node1-ib-vip/192.168.30.25/192.168.30.0/255.255.255.0/ipmpapp0, hosting node node1 VIP exists: /node1-vip/10.55.184.15/10.55.184.0/255.255.255.0/bondeth0, hosting node node1 bondib1 is on bondib1_0 and fails over to bondib1_1: # ipmpstat -g GROUP       GROUPNAME   STATE     FDT       INTERFACES ipmpapp0    ipmpapp0    ok        --        ipmpapp_0 (ipmpapp_1) bondeth0    bondeth0    degraded  --        net2 [net5] bondib1     bondib1     ok        --        bondib1_0 (bondib1_1) bondib0     bondib0     ok        --        bondib0_0 (bondib0_1) bondib1_0 goes over net24 # dladm show-link | grep bond LINK                CLASS     MTU    STATE    OVER bondib0_0           part      65520  up       net21 bondib0_1           part      65520  up       net22 bondib1_0           part      65520  up       net24 bondib1_1           part      65520  up       net23 net24 is IB Partition FFFF # dladm show-ib LINK         HCAGUID         PORTGUID        PORT STATE  PKEYS net24        21280001A1868A  21280001A1868C  2    up     FFFF net22        21280001CEBBDE  21280001CEBBE0  2    up     FFFF,8503 net23        21280001A1868A  21280001A1868B  1    up     FFFF,8503 net21        21280001CEBBDE  21280001CEBBDF  1    up     FFFF On Express Module 9 port 2: # dladm show-phys -L LINK              DEVICE       LOC net21             ibp4         PCI-EM1/PORT1 net22             ibp5         PCI-EM1/PORT2 net23             ibp6         PCI-EM9/PORT1 net24             ibp7         PCI-EM9/PORT2 Outbound traffic on the 192.168.10.0 network will be multiplexed between bondib0 & bondib1 # netstat -rn Routing Table: IPv4   Destination           Gateway           Flags  Ref     Use     Interface -------------------- -------------------- ----- ----- ---------- --------- 192.168.10.0         192.168.10.2         U        16    6551834 bondib1   192.168.10.0         192.168.10.1         U         9    5708924 bondib0   There is a lot more traffic on bondib0 than bondib1 # /bin/time snoop -I bondib0 -c 100 > /dev/null Using device ipnet/bondib0 (promiscuous mode) 100 packets captured real        
4.3 user        0.0 sys         0.0 (100 packets in 4.3 seconds = 23.3 pkts/sec) # /bin/time snoop -I bondib1 -c 100 > /dev/null Using device ipnet/bondib1 (promiscuous mode) 100 packets captured real       13.3 user        0.0 sys         0.0 (100 packets in 13.3 seconds = 7.5 pkts/sec) Half of the packets on bondib0 are outbound (from self). The remaining packet are split evenly, from the other nodes in the cluster. # snoop -I bondib0 -c 100 | awk '{print $1}' | sort | uniq -c Using device ipnet/bondib0 (promiscuous mode) 100 packets captured   49 node1-priv1.us.oracle.com   24 node2-priv1.us.oracle.com   14 node3-priv1.us.oracle.com   13 node4-priv1.us.oracle.com 100% of the packets on bondib1 are outbound (from self), but the headers in the packets indicate that they are from the IP address associated with bondib0: # snoop -I bondib1 -c 100 | awk '{print $1}' | sort | uniq -c Using device ipnet/bondib1 (promiscuous mode) 100 packets captured  100 node1-priv1.us.oracle.com The destination of the bondib1 outbound packets are split evenly, to node3 and node 4. # snoop -I bondib1 -c 100 | awk '{print $3}' | sort | uniq -c Using device ipnet/bondib1 (promiscuous mode) 100 packets captured   51 node3-priv1.us.oracle.com   49 node4-priv1.us.oracle.com Conclusion: bondib1 is currently only being used for outbound cluster interconnect interconnect traffic.

    Read the article

  • A Cost Effective Solution to Securing Retail Data

    - by MichaelM-Oracle
    By Mike Wion, Director, Security Solutions, Oracle Consulting Services As so many noticed last holiday season, data breaches, especially those at major retailers, are now a significant risk that requires advance preparation. The need to secure data at all access points is now driven by an expanding privacy and regulatory environment coupled with an increasingly dangerous world of hackers, insider threats, organized crime, and other groups intent on stealing valuable data. This newly released Oracle whitepaper entitled Cost Effective Security Compliance with Oracle Database 12c outlines a powerful story related to a defense in depth, multi-layered, security model that includes preventive, detective, and administrative controls for data security. At Oracle Consulting Services (OCS), we help to alleviate the fears of massive data breach by providing expert services to assist our clients with the planning and deployment of Oracle’s Database Security solutions. With our deep expertise in Oracle Database Security, Oracle Consulting can help clients protect data with the security solutions they need to succeed with architecture/planning, implementation, and expert services; which, in turn, provide faster adoption and return on investment with Oracle solutions. On June 10th at 10:00AM PST , Larry Ellison will present an exclusive webcast entitled “The Future of Database Begins Soon”. In this webcast, Larry will launch the highly anticipated Oracle Database In-Memory technology that will make it possible to perform true real-time, ad-hoc, analytic queries on your organization’s business data as it exists at that moment and receive the results immediately. Imagine real-time analytics available across your existing Oracle applications! Click here to download the whitepaper entitled Cost Effective Security Compliance with Oracle Database 12c.

    Read the article

  • Joomla Secondary Users

    - by Gaz_Edge
    Background I have a joomla based application. My customers sign up and they register as a user on the site. My customers (primary customers) then have their own space on the site that they can then setup their own customers (secondary customer). Question/Problem The problem I am having is that I need to tag each secondary customer to a primary customer. I thought about just creating a new component and having a separate table that includes all the secondary customers. The problem is that I then lose out on all the authentication, session handling and login/logout that the core joomla _users component offers. I then thought about just having all the users in the core _users table and add the primary customer associated with each secondary customer to a field in a profile plugin. This would work for the most part, but this means that primary customers cannot create a secondary customer with a user name that already exists in the _users table. I didn't think this would be an issue, but several of my primary customers (currently only test users) have been confused by the site telling them a username is not available, since they can only see the names of their own secondary customers. Any ideas on some architectural changes I could make to solve this?

    Read the article

  • Facebook shared links not opening (Redirecting correctly) [on hold]

    - by Hammad
    I have a problem, and after hours of searching I have found nothing useful to counter it. I have a website, and that website has an RSS feed attached to a Facebook page. As I post content to the website, it also appears on the Facebook page. But I have people complaining on my page that my posts don't open when they click the link to read the details. Since I am a Chrome user, I didn't notice this happening for months. But when I checked it through Firefox and Internet Explorer, I found that the shared links actually don't open in these browsers. They only work in Chrome, meaning they are properly redirected in the Chrome browser and not in Firefox and IE. Whenever I click on links to posts on my page through IE or Firefox, the URL does not redirect to my website and I see nothing, as if I am not connected to the Internet. When examining the URL I see this: http://www.facebook.com/l.php?u=http%3A%2F%2Fbit.ly%2F112NVO9&h=EAQHDgTUv&s=1 which shows that Facebook is not redirecting the links properly. Moreover, I also use the link-shortening service bit.ly to shorten my shared links. I have checked that the same problem exists even if I don't shorten my links. And I have checked that I am not alone: even links from the tech giant website mashable.com don't open in IE and FF from Facebook; they only open in Chrome. From a mobile phone the links don't open (are not redirected properly) even in Chrome. Can anyone tell me what the issue is? Not much is written about it on the Internet, as no one else seems to have faced this problem. P.S: I have checked from different systems; the problem persists.

    Read the article
