Search Results

Search found 20785 results on 832 pages for 'idea'.

  • Books or guides regarding secure key storage and database encryption

    - by Matty
    I have an idea for a SaaS product I want to create; however, this product will store extremely sensitive data that needs to be encrypted at rest. The trouble is not so much the encryption itself, but the problem of securely storing the keys, so that in the event the server is somehow compromised, the keys can't simply be recovered and used to decrypt the database. Are there any decent books or guides regarding database encryption, and in particular secure key storage? This seems to be a less than straightforward topic and something that is difficult to get right. I can see multiple ways to attack such a system, but I'm unable to come up with a design that is secure enough to store highly confidential information.
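
    As an aside, the pattern usually suggested for this problem is envelope encryption: encrypt the data with a data-encryption key (DEK), and wrap the DEK with a master key that never lives on the application server (for example, one held in an HSM or an external key-management service). A minimal sketch with OpenSSL follows; the file names are illustrative, and the -pbkdf2 option assumes OpenSSL 1.1.1 or newer.

        # Generate a random data-encryption key (DEK)
        openssl rand -base64 32 > data.key

        # Encrypt the sensitive data with the DEK
        openssl enc -aes-256-cbc -pbkdf2 -pass file:data.key -in records.sql -out records.sql.enc

        # Wrap the DEK with a master public key whose private half never touches this server
        openssl pkeyutl -encrypt -pubin -inkey master_public.pem -in data.key -out data.key.enc

        # Remove the plaintext DEK so only the wrapped copy remains on disk
        shred -u data.key

    With this layout, a stolen database dump plus the wrapped key is useless without the master private key, which is exactly the property the question is after.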

  • Ubuntu 12.10 TTY console (Ctrl+Alt+F[1-6]) not working

    - by Vanessa Deagan
    I've been an Ubuntu user for some time now. I have a very annoying problem: I have no idea what causes it, and I haven't managed to find anything relevant after Googling like crazy. The problem is that my TTY consoles are not working. Usually these are activated using Ctrl+Alt+F[1-6]. They worked when I was using the Nouveau drivers, but after installing the proprietary nVidia drivers, instead of getting a terminal console I get a strange monochrome pattern that slowly fades away. Does anyone know how to get Ctrl+Alt+F[1-6] working again?
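
    For what it's worth, one workaround commonly suggested for this kind of console corruption (an assumption on my part, not something stated in the question) is to force GRUB to hand the kernel a plain text console so the proprietary driver does not fight with the framebuffer:

        # Edit the GRUB defaults (back the file up first)
        sudo nano /etc/default/grub

        # Add or uncomment this line so the virtual consoles stay in text mode:
        #   GRUB_GFXPAYLOAD_LINUX=text

        # Regenerate the GRUB configuration and reboot
        sudo update-grub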

  • What level/format of access should be given to a client to the issue tracking system?

    - by dukeofgaming
    So, I used to think that it would be a good idea to give the customer access to the issue tracking system, but now I've seen that it creates less than ideal situations, like: the customer judging progress solely on ticket count; developers avoiding filing issues so the customer doesn't think there is less progress; and the customer appointing people on their side to add issues who don't always do a good job (lots of duplicate issues, insufficient information to reproduce, and other things that distract people from doing their real job). However, I think customers should have access to some indicators or proof that progress is being made, as well as a right to report bugs. So, what would be the ideal solution to this situation, especially for getting out of or improving the first situation described?

  • Execute a random command from a .txt file?

    - by Alberto Burgos
    I have an Ubuntu server, and I'm trying to post a Twitter quote using the app "twidge". So I made a list of tweets in a .txt file, one per line, and I want to pick one tweet from that file and send it to Twitter via twidge (or whatever other method is possible). I can print a random line with shuf: shuf -n 1 /var/www/tweets.txt and it works; it gives me back one of the tweets. But it does not send it to Twitter, even if the line itself is a command, i.e. twidge update "bla bla bla"; it just prints to the screen and doesn't send anything to Twitter. I tried turning the .txt into a .sh, but that didn't work... any idea? By the way, I want to use it with crontab, something like this: 15 * * * * shuf -n 1 /var/www/tweets.txt
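
    Assuming twidge is already configured for the account, the closest thing to the intent here is probably to feed the randomly chosen line to twidge update via command substitution, rather than trying to execute the line itself, and to put that whole command in the crontab:

        # Post one random line from the file (run this manually first to test)
        twidge update "$(shuf -n 1 /var/www/tweets.txt)"

        # The same thing as a crontab entry, at minute 15 of every hour
        15 * * * * twidge update "$(shuf -n 1 /var/www/tweets.txt)"

    If the lines in tweets.txt are themselves complete commands (each one already starting with twidge update), the alternative is to run the chosen line through a shell instead: bash -c "$(shuf -n 1 /var/www/tweets.txt)".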

  • Absolute Top Programming Tips [closed]

    - by Eric
    I'm very interested in the stuff that REALLY makes a critical difference to a career in programming, other than intrinsic things like how smart you are, where you were born, etc... Some ideas: 1) Best approach to managing small, medium, and large teams. 2) Most important books to read. 3) Most important skills to know. 4) Correct balance of learning theory vs. just writing code. 5) A good approach to estimating the time and cost of a project. 6) Etc... Please limit your answers. If you see that somebody has already written your idea, please just vote for their response. I'd like to see what the community thinks are the true indicators of a successful career in our field.

  • In centralized version control, is it always good to update often?

    - by janos
    Assuming that: you are on a team developing some software; your team is using centralized version control in the development process; you are working on a new feature which will surely take several days to complete, and you won't be able to commit before that because it would break the build; and your team members commit something every day that affects some of the files you're working with for your fancy new feature. Since this is centralized version control, you will have to update your local checkout at some point: at least once right before committing the new feature. If you update only once, right before your commit, then there might be a lot of conflicts due to the many other changes by your teammates, which could be a world of pain to resolve all at once. Or you could update often, and even if there are a few conflicts to resolve day by day, each should be easier to deal with, little by little. Can we say that it is always a good idea to update often?
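
    As an illustration only (assuming Subversion, which the question does not actually name), the "update often" routine amounts to something like this at the start of each working day:

        # Pull in everyone else's changes before writing any new code today
        svn update

        # List any files that ended up in conflict
        svn status | grep '^C'

        # Resolve each small conflict while the teammate's change is still fresh,
        # then mark it resolved, keeping your locally merged version
        svn resolve --accept=working path/to/conflicted_file

    Doing this daily keeps every individual merge small, which is the whole argument for updating often.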

  • Implementing invisible bones

    - by DeadMG
    I suddenly have the feeling that I have absolutely no idea how to implement invisible objects/bones. Right now, I use hardware instancing to store the world matrix of every bone in a vertex buffer, and then send them all to the pipeline. But frustum culling, or having bones set to invisible by my simulation for other reasons, means that some of them will be invisible at any given time. Does this mean I effectively need to re-fill the buffer from scratch every frame with only the visible units' matrices? This seems to me like it would involve a lot of wasted bandwidth.

  • Auto convert odt to pdf

    - by Gautam K
    I am creating a few documents in LibreOffice and I always have to send them as .pdf, but each and every time I forget to export them as PDF. So is there any way to automatically convert the .odt document into a PDF every time I save the document? I have only about 4 docs, and I keep making changes to them, so each time I make a change and save the .odt I need that change to be reflected in the corresponding PDF file. PS: I understand that unoconv can be used to convert via the command line, but is there a way to do it automatically? Another PS: I found out that there is something called inotify and inotify-tools that can be used to trigger events when a file changes, but I have no idea how to use it.
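
    A minimal sketch of the inotify-tools idea mentioned at the end, assuming the inotify-tools and unoconv packages are installed and that the documents live in ~/Documents (the path is illustrative):

        #!/bin/bash
        # Watch the folder and regenerate the PDF whenever an .odt file is saved
        inotifywait -m -e close_write --format '%w%f' ~/Documents | while read -r file; do
            case "$file" in
                *.odt) unoconv -f pdf "$file" ;;   # writes file.pdf next to file.odt
            esac
        done

    Running this from a startup script keeps the PDFs in sync without having to remember to export anything by hand.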

  • Structure of a correctly implemented JTable with TableModel and Listeners?

    - by bamboocha
    I am pretty new to Java and its JTables, and this is where I am struggling at the moment. I need to create a GUI which shows me the results of a SQL query like SELECT * FROM tblPeople WHERE name='Doe'. My idea was to create a JFrame which displays a JTable with all found records. Besides this, I also need to implement some code to handle a user double-clicking a record or selecting it with the arrow keys (additional feature: pressing 12, for example, should select the 12th record). What is the best way to structure my program (what classes do I need, and especially where do I store my logic)? I came up with the following structure: Main.java (the "view"); SQLConnection.java; PeopleTableModel.java (only stores and returns data given by the passed ResultSet; the "model", inherits from DefaultTableModel); PeopleTable.java (stores basically all my logic, including the KeyListener and MouseListener; the "controller", inherits from JTable). Are there better ways to achieve my goals? If so, what are they?

  • 3D architecture app for Android or iPhone

    - by Manixate
    I want to make an app for 3D modeling on iPhone/Android, but I can't get a basic idea of how to get started. I have various options, such as learning OpenGL ES, UDK or Unity3d, but I want to create models (e.g. architecture, etc.) in my app and then render them when the user is finished modeling. I do not know whether I would be able to design models and then render them in the same app with various effects on the iPhone/Android using UDK or Unity3d. (Note: If you find this question unclear please ask; I may have skipped some vital information.)

  • Should your client be able to view your project management board?

    - by bizso09
    We're making bespoke software for our client and use Codebase for our project management. Is it a good idea to let our client view our project management board? The advantages we thought of are that this would enhance the cooperation between the client and the dev team, following agile practices; he would essentially become part of our team. It would also reduce communication overhead and make sure we're on the same page. The client could track the progress of the system and make suggestions along the way on the user stories. In addition, he could submit bugs or feature requests. The disadvantages we thought of are that some aspects of the board might be too technical for the client, that he would suggest changes to the user stories too often, and that he might see content that we normally wouldn't want our client to see. For example, when we compromise on technology or functionality, the client might question that and insist on doing things one way or the other.

  • Contract-Popup at Login

    - by Steve
    I want to give my notebook to guests of my little hotel as an extra service. I love the Ubuntu guest account and I think it is the best possible way to help my guests get free internet access. I found out how to "design" their user accounts with /etc/skel, but unfortunately I have no clue how to show them a small introduction to the system and a kind of user-agreement "contract" when they log in. I read about xmessage, but it is too minimalistic; I'd like to include some pictures. Does anyone have any idea how to make this possible? Would it be possible to log the user out automatically if they reject the user agreement? Thank you so much in advance, Steve.
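
    A minimal sketch of the accept-or-log-out part, assuming zenity is available and that the script is launched at login via an autostart .desktop entry dropped into /etc/skel/.config/autostart (the file names and paths here are illustrative). Showing pictures would need something richer, such as yad or a small HTML page:

        #!/bin/bash
        # Show the house rules; log the guest out immediately if they decline.
        AGREEMENT=/etc/hotel-agreement.txt   # assumed location of the agreement text

        if ! zenity --text-info --title="Welcome" --filename="$AGREEMENT" \
                    --ok-label="I accept" --cancel-label="I decline" \
                    --width=600 --height=450; then
            gnome-session-quit --logout --no-prompt   # end the session on decline
        fi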

  • OOP private method parameters coding style

    - by Jake
    After coding for many years as a solo programmer, I have come to feel that most of the time there are many benefits to writing private member functions with all of the member variables they use included in the parameter list, especially during the development stage. This allows me to see at a glance which member variables are used, and also lets me supply other values for tests and debugging. Also, removing a particular member variable can break many functions; in this case, however, the private function remains isolated and I can still call it with other values without fixing it. Is this a bad idea after all, especially in a team environment? Is it redundant or confusing, or are there better ways?

  • Right mix of planning and programming on a new project

    - by WarrenFaith
    I am about to start a new project (a game, but that's unimportant). The basic idea is in my head, but not all the details. I don't want to start programming without planning, but I am seriously fighting my urge to just do it. I want some planning up front to avoid refactoring the whole app just because a new feature I think of later requires it. On the other hand, I don't want to spend multiple months (of spare time) planning before I start, because I fear I will lose my motivation in that time. What I am looking for is a way of combining both without one dominating the other. Should I run the project as Scrum? Should I create user stories and then implement them? Should I work feature-driven? (I have some experience with Scrum and the classic "specification to code" way.)

  • Just been hired as a senior developer, never even been a junior developer, what should I expect?

    - by Mark James
    I've been a freelancer and a coder by night for a while, and recently I was hired, after several rounds of interviews, by a nice NY company, even though I have some gaps in specific fields. Is it common for companies to hire seniors with less experience? Will they wait a few weeks to allow for a certain learning curve? I don't know anything about working in a company, so that's why I worry. After one week, I'm still checking and exploring the sources, but it seems that some coworkers are starting to consider me slow. I'm good at maths, physics and algorithms, but I still need to learn about all the templates used in this company. Has anyone here received a less-experienced senior member on their team? Is this acceptable? I'm planning to have a meeting with my boss to stop worrying about this. Does that sound like a good idea?

  • How would you want to see software intellectual property protected?

    - by glenatron
    Reading answers to this question - and many other discussions of software patents - it seems that most of us as programmers feel that software patents are a bad idea. At the same time we are in the group most likely to lose out if our work is copied or stolen. So what level of Intellectual Property Protection does code and software need? Is copyright sufficient? Are patents necessary? As software is neither a physical object nor simple text, should we be thinking of a third path that falls somewhere between the two? Do we need any protection at all? If you had the facility to set up the law for this, what would you choose?

  • Is it common to prototype in a higher level language?

    - by Mark Canlas
    I'm currently toying with the idea of embarking on a project that far exceeds my current programming ability in a language I have very little real world experience in (C). Would it be valuable to prototype in a higher level language that I'm more familiar with (like Perl/Python/Ruby/C#) just so I can get the overall design going? Ultimately, the final product is performance sensitive, hence the choice of C, but I'm afraid not knowing C well will make me lose the forest for the trees. While searching for similar questions, I noticed one fellow mention that programmers used to prototype in Prolog, then crank it out in assembler.

  • Error running phusion passenger in standalone mode

    - by msidell
    I'm trying to run standalone Phusion Passenger so that I can run different ruby rvm configurations on the same host. I already have ruby and passenger running fine on this host. I am following the instructions here. When I run standalone passenger the first time, it appears to successfully install nginx. But then when it tries to run, I get this error:

        [root@clark directra]# passenger start -a 127.0.0.1 -p 3001 -d --user dweb
        *** ERROR *** Could not start Passenger Nginx core:
        nginx: [alert] could not open error log file: open() "/tmp/passenger-standalone.16757/logs/error.log" failed (2: No such file or directory)
        nginx: [alert] Unable to start the Phusion Passenger watchdog (/var/lib/passenger-standalone/3.0.11-x86-ruby1.9.3-linux-gcc4.1.2-1002/support/agents/PassengerWatchdog): Permission denied (13) (13: Permission denied)
        Stopping web server... done

    FWIW, /tmp is writeable. Any idea what's wrong?

  • remastersys created Live DVD hangs in "Choose a picture"

    - by eos2012
    I used remastersys to create a Live DVD. Then I used the Live DVD for the installation on another computer. The installation hung at the "Choose a picture" step: both the "Back" and "Continue" buttons were disabled, and it seemed like the installer was stuck. I had to power-cycle the computer and reinstall from the Live DVD again; after the power-cycle, the installation went through successfully. Any idea why the installation hung at the "Choose a picture" step, and how to fix it without power-cycling the computer? Thanks a lot!

  • I am not speaking at SQL Connections February 2011 meeting in Chicago suburbs

    - by Alexander Kuznetsov
    Usually it is an honor when we get to present to a user group, but not this time, so let me explain. I have no idea how my presentation got briefly mentioned in the invitation which went out today, without my consent. I have never asked or agreed to speak at SQL Connections February 2011 meeting in Chicago suburbs. Yet I apologize for any inconvenience it might have caused. I was going to speak at the meeting of December 2010, which was agreed by email with the person in charge. I had spent some...(read more)

  • How to prevent my screen from either dimming or the screen-lock starting when watching YouTube?

    - by Steven Roose
    My screen brightness is set to dim after a few seconds to preserve battery; this is the default in Ubuntu 12.04. However, when watching video it should not dim. This works correctly when I watch videos in native applications like VLC. With in-browser video, however, the screen is not prevented from dimming, which is very annoying because you have to move your cursor every 10 seconds or so. I used to use Mac OS X, where I had the same dimming settings and Flash videos were taken into account correctly. Does anyone have an idea how to stop the screen from dimming while watching YouTube?

  • SharePoint Saturday LA - Free Conference

    - by MOSSLover
    There are four really cool national board members for Women in SharePoint: Cathy Dew, Nedra Allmond, Michelle Strah, and Lori Gowin. Nedra is running Women in SharePoint West and she just also happens to be helping out with SharePoint Saturday LA. If you guys had no idea that California also has SharePoint Saturdays then you were wrong. There is a SharePoint Saturday on April 2nd in the greater Los Angeles Area. If anyone is interested in the vicinity please visit this site: http://www.sharepointsaturday.org/la/default.aspx. Technorati Tags: SharePoint Saturday, Los Angeles, SharePoint 2010, SharePoint Events

  • Other people's files showing up in rhythmbox

    - by Avery Boyer
    I have my computer connected to a college network, and right now files that belong to other individuals on campus are showing up under Shared in Rhythmbox. This is driving me up the wall: I absolutely despise the idea that files are being thrown around on the network, that other people's s*** is showing up on my computer, and that they may be able to see my files as well. This is a very, very serious problem as far as I am concerned. I want to know how I can ensure that I am sharing none of my files with the network, and that no one else's files show up on my computer.

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options:

    - shares for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles),
    - dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example), and
    - capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets. For example, you can throttle an application to 0.125 of a CPU.

    (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "it won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds an application received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
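
    To make the resource-management half of this concrete, here is a small illustration of my own (not taken from the post) using the documented Solaris project resource controls; the project name is made up and assumed to already exist:

        # Make the Fair Share Scheduler the default class, so CPU shares take effect
        # (takes effect at the next boot)
        dispadmin -d FSS

        # Give the 'webapp' project 20 CPU shares (proportional allocation)
        projmod -s -K "project.cpu-shares=(privileged,20,none)" webapp

        # Cap the same project at 50% of one CPU (capped CPU allocation)
        projmod -s -K "project.cpu-cap=(privileged,50,deny)" webapp

        # Watch per-project CPU usage to see the effect
        prstat -J

    The post's point is that a workload manager should turn these knobs automatically, driven by externally measured response times or transaction rates rather than by an administrator guessing at numbers.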

  • Services or Shared Libraries?

    - by Royal
    I work in an environment where we have several different web applications, each of which has different features but still needs to do similar things: authentication, reading from common data sources, storing common data, etc. Is it better to build the shared functionality into a set of services, to be called by the web apps, or is it better to make a shared library that the web apps include? The services or libraries would need to access various databases, and it seems like keeping that access in a single place (a service) is a good idea; it would also reduce the number of database connections needed. A service would also keep the logic in a single place, but then it could be argued that a shared library can do the same thing. Are there other benefits to be gained from using services over shared libraries?
