Search Results

Search found 2693 results on 108 pages for 'keeping up'.

Page 90/108

  • MySQL PHP | "SELECT FROM table" using "alphanumeric"-UUID. Speed vs. Indexed Integer / Indexed Char

    - by dropson
    At the moment, I select rows from 'table01' using: SELECT * FROM table01 WHERE UUID = 'whatever'; The UUID column is a unique index. I know this isn't the fastest way to select data from the database, but the UUID is the only row identifier that is available to the front-end. Since I have to select by UUID and not ID, I need to know which of these two options I should go for if, say, the table consists of 100,000 rows. What speed differences would I be looking at, and would the index for the UUID grow too large and slow down the DB? Option 1: get the ID before doing the "big" select:

        1. $id = "SELECT ID FROM table01 WHERE UUID = '{alphanumeric character}'";
        2. $rows = "SELECT * FROM table01 WHERE ID = $id";

    Option 2: keep it the way it is now, using the UUID:

        1. SELECT * FROM table01 WHERE UUID = '{alphanumeric character}';

    Side note: all new rows are created by checking whether the system-generated unique ID already exists before trying to insert a new row, keeping the column always unique. The "example" table:

        CREATE TABLE Table01 (
            ID int NOT NULL PRIMARY KEY,
            UUID char(15),
            name varchar(100),
            url varchar(255),
            `date` datetime
        ) ENGINE = InnoDB;
        CREATE UNIQUE INDEX UUID ON Table01 (UUID);
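
    If a rough feel for the difference would help, one option is a throwaway local benchmark. The sketch below uses Python with sqlite3 purely because it needs no server; that is an assumption for illustration only, since InnoDB/MySQL behaves differently, so the real measurement should be repeated against MySQL (for example from PHP or with mysqlslap):

        import sqlite3
        import time
        import uuid

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE table01 (ID INTEGER PRIMARY KEY, UUID CHAR(15), name TEXT)")
        conn.execute("CREATE UNIQUE INDEX idx_uuid ON table01 (UUID)")

        # 100,000 rows with short alphanumeric UUIDs, mirroring the question's setup.
        rows = [(i, uuid.uuid4().hex[:15], "some name") for i in range(100000)]
        conn.executemany("INSERT INTO table01 VALUES (?, ?, ?)", rows)

        probe_uuid = rows[50000][1]

        start = time.perf_counter()
        for _ in range(10000):
            conn.execute("SELECT * FROM table01 WHERE UUID = ?", (probe_uuid,)).fetchone()
        print("one step, by UUID:", time.perf_counter() - start)

        start = time.perf_counter()
        for _ in range(10000):
            # Two-step variant: resolve the integer ID first, then select by it.
            row_id = conn.execute("SELECT ID FROM table01 WHERE UUID = ?", (probe_uuid,)).fetchone()[0]
            conn.execute("SELECT * FROM table01 WHERE ID = ?", (row_id,)).fetchone()
        print("two steps, via ID:", time.perf_counter() - start)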

    Read the article

  • Rails: How do I validate against this code that I put into the lib/ directory?

    - by randombits
    I'm having a bit of difficulty finding the proper way to mix in code that I put into the lib/ directory for Rails 2.3.5. I have several models that require phone validation. At least three models used the same code, so I wanted to keep things DRY and moved it out to the lib/ directory. I used to have code like this in each model: validate :phone_is_valid Then I'd have a phone_is_valid method in the model:

        protected
        def phone_is_valid
          # process a bunch of logic
          errors.add_to_base("invalid phone") if validation failed
        end

    I moved this code out into lib/phones/, where I have lib/phones/phone_validation.rb, and in there I copy-pasted the phone_is_valid method. My question is: how do I mix this into all of my models now? And does my validate :phone_is_valid call remain the same, or does that change? I want to make sure that the errors.add_to_base method continues to function as it did before, while keeping everything DRY. I also created another file in lib/phones/ called lib/phones/phone_normalize.rb. Again, many models need the value input by the user to be normalized, i.e. turning (555) 222-1212 into 5552221212 or something similar. Can I call that simply by invoking Phones::Phone_Normalize::normalize_method(number)? I suppose I'm confused about the following: (1) how to use the lib directory for validation, and (2) how to use the lib directory for commonly shared methods that return values.

    Read the article

  • Security considerations processing emails

    - by Timmy O' Tool
    I have a process that will be reading emails from an account. The objective of the process is to save to a database those emails that have image(s) as attachments. I will be saving the sender, subject, body, and image path (the image itself will be saved by the process). I will be showing this information on a page, so I would like to know all (or at least most :) ) of the security aspects to cover. I plan to sanitize the subject and body of the email. I can remove most of the tags; it would probably be enough to keep the <p> tag. I'm not sure I can trust a sanitizer alone, so I would like to HTML-encode everything except the <p> tag after sanitizing, just in case. Any suggestions? Since I'm only accepting images as attachments, as I said above, are there any security risks I have to take into account in relation to the attachments? Thanks!
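
    A minimal sketch of the "encode everything, then let only bare <p> tags back in" idea, written in Python since the question doesn't name a language; because only attribute-free <p> and </p> are restored, markup such as onclick handlers stays escaped:

        import html
        import re

        def sanitize_body(body):
            # Escape every tag and entity first, then selectively restore bare
            # <p> and </p> tags; anything carrying attributes stays escaped.
            escaped = html.escape(body)
            return re.sub(r"&lt;(/?)p&gt;", r"<\1p>", escaped)

        print(sanitize_body('<p onclick="evil()">hi</p><p>ok</p>'))
        # &lt;p onclick=&quot;evil()&quot;&gt;hi</p><p>ok</p>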

    Read the article

  • Should I write my own forum software?

    - by acidzombie24
    I have already built a site from scratch. It has banning, PMs, comments, etc. The PMs and comments are done using Markdown (like SO). There are pros and cons to writing my own or using existing software, but some of the cons keeping me from using existing forum software are:

    - Multiple logins: one for the site, one for the separate forums.
    - Need to customize code: I'll need to change the toolbar in the forum software so I can access pages on the regular site.
    - Look consistency: it may look drastically different from my site even after applying lots of CSS changes.
    - Banning and user consistency: users may be banned on the site or on the forums but not the other, and users may pick a different username (or several) on the forum instead of being forced to use the same username on both site and forum.

    Should I write my own forum code, or should I use something already written? What are some reasons for or against writing my own versus using existing forum software?

    Read the article

  • how to seamlessly integrate subversion and git?

    - by mattv
    I'm looking for tips on how to seamlessly integrate subversion and git, for deploying web sites by a small team of web developers. We each have our own development versions of our sites on our local machines. We also have dev, staging, and live servers. As our team has grown, we haven't updated our revision control and deployment strategies accordingly. We had all been checking into the trunk of a shared Subversion repository. Both the dev & staging servers ran from a checkout of the trunk, so updating them involved running "svn update" while the live server ran as an export from trunk which required an "svn export" to get the latest code. In either case, we would often update just certain files by updating or exporting just those files or directories. That worked okay when there was just one or two developers. However, a big downside was that we couldn't point to an individual tag that represented what was currently on live at any given time. In keeping with corporate policy, we'd like to continue to use Subversion to store what we're now calling our "production branch," which will be what goes onto staging and live. However, we would like to use Git on our local and development sites. We especially like the idea of easier merges and being able to "cherry pick" updates that need to go live. We had initially planned on using git-svn, but it doesn't seem to work well in a shared environment such as our dev or staging servers. Anyone else doing something like this? What's the best way to make it work? Or are we making it more difficult than it should be?

    Read the article

  • MPI Odd/Even Compare-Split Deadlock

    - by erebel55
    I'm trying to write an MPI version of a program that runs an odd/even compare-split operation on n randomly generated elements. Process 0 should generate the elements and send nlocal of them to each of the other processes (keeping the first nlocal for itself). From there, process 0 should print out its results after running the CompareSplit algorithm, then receive the results from the other processes' runs of the algorithm, and finally print out the results it has just received. I have a large chunk of this already done, but I'm getting a deadlock that I can't seem to fix. I would greatly appreciate any hints that people could give me. Here is my code: http://pastie.org/3742474 Right now I'm pretty sure that the deadlock is coming from the Send/Recv at lines 134 and 151. I've tried changing the Send to use "tag" instead of myrank for the tag parameter, but when I did that I just kept getting an "MPI_ERR_TAG: invalid tag" error for some reason. Obviously I would also run the algorithm within process 0, but I took that part out for now, until I figure out what is going wrong. Any help is appreciated.
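
    The linked code isn't reproduced here, but the classic cause of this kind of deadlock is both partners in an exchange calling a blocking Send before either posts its Recv. A minimal sketch of the deadlock-free pattern using a combined send/receive, written with mpi4py rather than the C the question presumably uses (an assumption made purely for illustration):

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()
        nlocal = 4

        local = np.sort(np.random.randint(0, 100, nlocal))
        incoming = np.empty(nlocal, dtype=local.dtype)

        for phase in range(size):
            # Even phases pair (0,1)(2,3)...; odd phases pair (1,2)(3,4)...
            partner = rank + 1 if (phase + rank) % 2 == 0 else rank - 1
            if 0 <= partner < size:
                # Sendrecv posts the send and the receive together, so neither
                # partner blocks waiting for the other one to call Recv first.
                comm.Sendrecv(local, dest=partner, sendtag=phase,
                              recvbuf=incoming, source=partner, recvtag=phase)
                merged = np.sort(np.concatenate((local, incoming)))
                local = merged[:nlocal] if rank < partner else merged[nlocal:]

    As for MPI_ERR_TAG, that error usually just means the tag value is negative or above MPI_TAG_UB; any small non-negative constant that both sides agree on should be acceptable.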

    Read the article

  • The Purpose of a Service Layer and ASP.NET MVC 2

    - by user332022
    In an effort to understand MVC 2 and attempt to get my company to adopt it as a viable platform for future development, I have been doing a lot of reading lately. Having worked with ASP.NET pretty exclusively for the past few years, I had some catching up to do. Currently, I understand the repository pattern, models, controllers, data annotations, etc. But there is one thing keeping me from understanding enough to start work on a reference application: the Service Layer pattern. I have read many blog posts and questions here on Stack Overflow, but I still don't completely understand the purpose of this pattern. I watched the entire video series at MVCCentral on the Golf Tracker application and also looked at the demo code he posted, and it looks to me like the service layer is just another wrapper around the repository pattern that doesn't perform any work at all. I also read this post: http://www.asp.net/Learn/mvc/tutorial-38-cs.aspx and it seemed to somewhat answer my question; however, if you are using data annotations to perform your validation, the layer seems unnecessary. I have looked for demonstrations, posts, etc., but I can't seem to find anything that simply explains the pattern and gives me compelling evidence to use it. Can someone please provide me with a 2nd-grade (ok, maybe 5th-grade) reason to use this pattern, what I would lose if I don't, and what I gain if I do?
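
    For what it's worth, the usual pitch for the pattern is that the service layer holds business rules that neither a data annotation nor a repository can express. A minimal language-agnostic sketch, written in Python only for brevity (the question is about ASP.NET MVC, and the names are invented for illustration):

        class ProductRepository:
            """Pure data access: no business rules, just persistence."""
            def __init__(self):
                self._rows = {}

            def get(self, product_id):
                return self._rows.get(product_id)

            def save(self, product):
                self._rows[product.id] = product


        class ProductService:
            """Business rules live here, not in the controller or the repository."""
            def __init__(self, repository):
                self._repository = repository

            def discontinue(self, product_id):
                product = self._repository.get(product_id)
                if product is None:
                    raise LookupError("unknown product")
                if product.units_on_order > 0:
                    # A rule that is neither validation-attribute material nor
                    # data access: it spans state the controller shouldn't own.
                    raise ValueError("cannot discontinue a product with open orders")
                product.discontinued = True
                self._repository.save(product)

    Without the layer, a rule like "no discontinuing a product with open orders" tends to get duplicated across controllers; with it, controllers stay thin and repositories stay persistence-only.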

    Read the article

  • Am I a dinosaur programmer?

    - by dlb
    I have been a professional programmer for more than 30 years, and have chosen a career path involving hands-on programming. Programming is something that I love, and I take great pride in the fact that I have continued to keep up to date with current technology. Projects on which I have worked include large enterprise projects as well as smaller desktop programs. The problem I am facing is that I do not have any web-based experience other than some web services. Most of the jobs now available have some web component. I have now been out of work for a year and a half, and have been keeping busy by studying technology that will bridge that gap: CSS, JavaScript, jQuery, and Ruby on Rails; AJAX is next. Hiring managers give no consideration whatsoever to the studying that I have been doing. I know that I cannot compete at a senior software level, but companies will not hire someone with my experience at a more junior level. Is there any way to break out of this catch-22?

    Read the article

  • Managing My Database in Source Control

    - by Jason
    As I am working with a new database project (within VS2008), and as I have never developed a database from scratch, I immediately began looking into how to manage a database within source control (in this case, Subversion). I found some information on SO, including this post: Keeping development databases in multiple environments in sync. One of the answers in particular pointed to a number of links, all of which had good, useful information. I was reading a series of posts by K. Scott Allen which describe how he manages database change. From my reading (and please pardon the noobishness of my question), it seems as though the database itself is never checked into the repository. Rather, scripts that can build the database, along with test data (which is also populated from scripts), are checked into the repository. Ultimately, this means that when a developer is testing his or her app, these scripts, which are part of the build process, are run. This ensures that the database is up to date, and it is also built locally on every developer's machine. This makes sense to me (if I am indeed reading that correctly). However, if I am missing something, I would appreciate correction or additional guidance. In addition, another question I wanted to ask: does this also mean that I should NOT check in the .mdf or .ldf files that are created by Visual Studio? Thanks for any help and additional insight. Always appreciated.
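
    If it helps to picture that workflow, here is a minimal sketch of a "rebuild from scripts" step, in Python with sqlite3 standing in for SQL Server purely to keep the sketch self-contained (an assumption; a real project would run the scripts through sqlcmd or the VS database project instead):

        from pathlib import Path
        import sqlite3  # stand-in engine; the question is about SQL Server

        def rebuild_database(script_dir, db_path):
            """Rebuild a local database purely from versioned scripts.

            The scripts (001_create_tables.sql, 002_seed_test_data.sql, ...) are
            what lives in source control; the .mdf/.ldf files never get checked in.
            """
            connection = sqlite3.connect(db_path)
            for script in sorted(Path(script_dir).glob("*.sql")):
                connection.executescript(script.read_text())
            connection.commit()
            connection.close()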

    Read the article

  • java: assigning object reference IDs for custom serialization

    - by Jason S
    For various reasons I have a custom serialization scheme where I am dumping some fairly simple objects to a data file. There are maybe 5-10 classes, and the object graphs that result are acyclic and pretty simple (each serialized object has 1 or 2 references to another object that is serialized). For example:

        class Foo {
            final private long id;
            public Foo(long id, /* other stuff */) { ... }
        }
        class Bar {
            final private long id;
            final private Foo foo;
            public Bar(long id, Foo foo, /* other stuff */) { ... }
        }
        class Baz {
            final private long id;
            final private List<Bar> barList;
            public Baz(long id, List<Bar> barList, /* other stuff */) { ... }
        }

    The id field is just for the serialization: when I am serializing to a file, I write objects by keeping a record of which IDs have been serialized so far, then for each object checking whether its child objects have been serialized and writing the ones that haven't, and finally writing the object itself by writing its data fields and the IDs corresponding to its child objects. What's puzzling me is how to assign IDs. I thought about it, and it seems like there are three cases for assigning an ID:

    - dynamically created objects: the ID is assigned from a counter that increments
    - objects read from disk: the ID is assigned from the number stored in the disk file
    - singleton objects: the object is created prior to any dynamically created object, to represent a singleton that is always present

    How can I handle these properly? I feel like I'm reinventing the wheel and there must be a well-established technique for handling all the cases.
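
    One language-agnostic way to cover the three cases is a single registry that reserves low IDs for singletons, records the IDs read back from disk, and hands out new IDs from a counter kept above both. A minimal sketch in Python (the question is about Java; the class and method names here are made up for illustration):

        import itertools

        class IdRegistry:
            """Assigns stable IDs for a custom serialization scheme.

            Singletons and disk-loaded objects are registered first with their
            known IDs; newly created objects draw from a counter that starts
            above anything registered so far.
            """
            def __init__(self, first_dynamic_id=1000):
                self._ids = {}                       # keyed by object identity
                self._counter = itertools.count(first_dynamic_id)

            def register_fixed(self, obj, fixed_id):
                # Singletons, and objects deserialized from disk, keep a known ID.
                self._ids[id(obj)] = fixed_id

            def id_for(self, obj):
                # Dynamically created objects get the next counter value on first use.
                if id(obj) not in self._ids:
                    self._ids[id(obj)] = next(self._counter)
                return self._ids[id(obj)]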

    Read the article

  • XML to JSON - losing root node

    - by Mike
    I'm using net.sf.json with a Java project and it works great. This XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <important-data certified="true" processed="true">
          <timestamp>232423423423</timestamp>
          <authors>
            <author>
              <firstName>Tim</firstName>
              <lastName>Leary</lastName>
            </author>
          </authors>
          <title>Flashbacks</title>
          <shippingWeight>1.4 pounds</shippingWeight>
          <isbn>978-0874778700</isbn>
        </important-data>

    converts to this JSON:

        {
          "@certified": "true",
          "@processed": "true",
          "timestamp": "232423423423",
          "authors": [
            {
              "firstName": "Tim",
              "lastName": "Leary"
            }
          ],
          "title": "Flashbacks",
          "shippingWeight": "1.4 pounds",
          "isbn": "978-0874778700"
        }

    However, the root tag <important-data> is lost in the conversion. Being new to XML and JSON, I am not sure whether this is supposed to be the correct behaviour. If not, is there any way to tell net.sf.json to keep the root node when converting? Thanks.
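
    For comparison, not every converter drops the document element; Python's xmltodict, for instance, keeps it as the single top-level key, which is the shape being asked about (a quick illustrative check, not a net.sf.json solution):

        import json
        import xmltodict  # third-party: pip install xmltodict

        xml = """<important-data certified="true" processed="true">
          <timestamp>232423423423</timestamp>
          <title>Flashbacks</title>
        </important-data>"""

        # xmltodict keeps the document element, so the root name survives the trip.
        print(json.dumps(xmltodict.parse(xml), indent=2))
        # {"important-data": {"@certified": "true", "@processed": "true",
        #   "timestamp": "232423423423", "title": "Flashbacks"}}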

    Read the article

  • subversion: how to manage tweaked files

    - by punk4funk
    Our group is considering moving to SVN. But I can't seem to find a way to do the following: I need to make minor tweaks locally to about 20 files in the repository without having SVN consider them "changed" and include them in the commit (changes like communication time-outs and logging levels). Ideally, I would want to merge the tweaked files with newer versions in the repository (keeping the tweaked local files up to date with committed changes from other users). I can't imagine we're unique in wanting/needing this. Are there best practices around this type of use case? One thing I'm considering is putting all the tweaked files into a branched "tweaked" working copy, then merging my tweaked files into my "official" working copy, and then using a script, which compares the "tweaked" and "official" working copies, to update my ignore list. The script would also un-ignore and alert me to any files that had both tweaks and other changes that, presumably, needed to be committed to the repository. This seems kinda hacky and I can't imagine there's not a better way.
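
    A minimal sketch of the comparison-script idea, in Python since the question doesn't name a scripting language (the paths and the .svn filter are illustrative):

        import filecmp
        import os

        def tweaked_files(official_wc, tweaked_wc):
            """Return the paths that differ between the two working copies."""
            differing = []

            def walk(cmp, prefix=""):
                for name in cmp.diff_files:
                    differing.append(os.path.join(prefix, name))
                for name, sub in cmp.subdirs.items():
                    walk(sub, os.path.join(prefix, name))

            # Skip SVN bookkeeping directories so only content differences show up.
            walk(filecmp.dircmp(official_wc, tweaked_wc, ignore=[".svn"]))
            return differing

        # Files in this list are candidates for the ignore list; anything else
        # that differs presumably still needs a real commit.
        print(tweaked_files("wc-official", "wc-tweaked"))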

    Read the article

  • python matrices - list index out of range

    - by user1888493
    I am writing a function that takes a matrix as input, such as the one below, and returns the matrix's "inverse", where all the 1s are changed to 0s and all the 0s are changed to 1s, while keeping the diagonal from top left to bottom right all 0s. An example input:

        g1 = [[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]]

    The function should output this:

        g1 = [[0, 0, 0, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]]

    When I run the program, it raises a list index out of range error. I'm sure this happens because the loops I have set up are trying to access values that do not exist. But how do I allow an input of unknown row and column size? I only know how to do this with a single list, but a list of lists? Below is the transforming function, but not the test function that calls it:

        def inverse_graph(graph):
            # take in graph
            # change all zeros to ones and ones to zeros
            r, c = 0, 0  # row, column equal zero
            while (graph[r][c] == 0 or graph[r][c] == 1):  # while the current row has a value
                while (graph[r][c] == 0 or graph[r][c] == 1):  # while the current column has a value
                    if (graph[r][c] == 0):
                        graph[r][c] = 1
                    elif (graph[r][c] == 1):
                        graph[r][c] = 0
                    c += 1
                c = 0
                r += 1
            c = 0
            r = 0
            # sets diagonal to zeros
            while (g1[r][c] == 0 or g1[r][c] == 1):
                g1[r][c] = 0
                c += 1
                r += 1
            return graph
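
    For what it's worth, one way to handle unknown dimensions is to bound the loops with len() instead of probing for values until an index fails; the following is an illustrative rewrite, not code from the question:

        def inverse_graph(graph):
            # Flip every 0 to 1 and every 1 to 0, but force the main diagonal to 0.
            for r in range(len(graph)):
                for c in range(len(graph[r])):
                    graph[r][c] = 0 if r == c else 1 - graph[r][c]
            return graph

        g1 = [[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]]
        print(inverse_graph(g1))  # [[0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]]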

    Read the article

  • GNU Makefile: multiple outputs from single rule + preventing intermediate files from being deleted

    - by makesaurus
    This is sort of a continuation of the question from link text. The problem is that there is a rule generating multiple outputs from a single input, and the command is time-consuming, so we would prefer to avoid recomputation. Now there is an additional twist: we want to keep the files from being deleted as intermediate files, and the rules involve wildcards to allow for parameters. The solution suggested was to set up the following rules:

        file-a.out: program file.in
            ./program file.in file-a.out file-b.out file-c.out
        file-b.out: file-a.out
            @
        file-c.out: file-b.out
            @

    Then calling make file-c.out creates all of them, and we avoid issues with running make in parallel with the -j switch. All fine so far. The problem is the following: because the above solution sets up a chain in the DAG, make considers it differently; the files file-a.out and file-b.out are treated as intermediate files, and by default they get deleted as unnecessary as soon as file-c.out is ready. A way of avoiding that was mentioned somewhere here, and consists of adding file-a.out and file-b.out as dependencies of the target .SECONDARY, which keeps them from being deleted. Unfortunately, this does not solve my case, because my rules use wildcard patterns; specifically, my rules look more like this:

        file-a-%.out: program file.in
            ./program $* file.in file-a-$*.out file-b-$*.out file-c-$*.out
        file-b-%.out: file-a-%.out
            @
        file-c-%.out: file-b-%.out
            @

    so that one can pass a parameter that gets included in the file name, for example by running make file-c-12.out. The solution the make documentation suggests is to add these implicit rules to the list of dependencies of .PRECIOUS, thus keeping the files from being deleted. The solution with .PRECIOUS works, but it also prevents the files from being deleted when a rule fails and the files are incomplete. Is there any other way to make this work?

    Read the article

  • C++: Declare static variable in function argument list

    - by MDC
    Is there any way at all in C++ to declare a static variable while passing it to a function? I'm looking to use a macro that expands to the expression passed to the function. The expression needs to declare and initialize a static variable on that particular line (based on the filename and line number, using __FILE__ and __LINE__).

        int foo(int b) {
            int c = b + 2;
            return c;
        }

        int main() {
            int a = 3;
            a = foo(static int h = 2);  // <---- see this!
            cout << a;
            return 0;
        }

    The problem I'm trying to solve is getting the filename and line number with the __FILE__ and __LINE__ macros provided by the preprocessor, and then creating a lookup table with integer keys leading to the (__FILE__, __LINE__) pairs. For example, the key 89 may map to file foo.cpp, line 20. To get this to work, I'm trying to use local static variables, so that they are initialized only once per line of execution. The static variable will be initialized by calling a function that calculates the integer key and adds an entry to the lookup table if it is not there. Right now the program uses a message class to send exception information. I'm writing a macro to wrap this class in a new class: WRAPPER_MACRO(old_class_object) will expand to NewClass(old_class_object, key_value). If I add the static variable declaration as a second line right before this, it should work. The problem is that in most places in the code, the old class object is passed as an argument to a function. So the problem becomes declaring and initializing the static variable somehow within the macro, while keeping the existing function calls.

    Read the article

  • Design patterns for Caching Images in a MVC?

    - by Onema
    Hi, I'm designing an image cache system that will be used in an MVC CMS. The main purpose of the image cacher is to modify images (scale, crop, etc.) and cache them on the client site. I have created an image cache Model and Mapper that interact with the database to keep track of the images and know what kind of actions have been applied to them (scale, crop, etc.). In addition to the Model and Mapper, I have created an ImageCacher class that is used by the API to manage the Model and image creation based on arguments passed by the client site; this class creates the images and generates the links to the images for the View. A coworker argued that I need to include the functionality of this last class inside the Model, since the bulk of the logic should go in the model. I respectfully disagree with him, since I feel the Model's responsibility is to deal with the information about the images cached at the database level, and the responsibility of the ImageCacher class is to create the URL/image that we will be caching (keeping the single responsibility principle). In addition to this, I believe that a Model should not have presentation-related features, like creating or showing images. Does anyone have any insight on this? Is there a particular design pattern that would make this division of tasks clear and the image cacher reusable? Should I add all the logic to the Model? Thank you.

    Read the article

  • Can I create an activity for a particular task without that task coming to the foreground?

    - by Neil Traft
    Here's my use case: The app starts at a login screen. You enter your credentials and hit the "Login" button. Then a progress dialog appears and you wait for some stuff to download. Once the stuff has downloaded, you are taken to a new activity. Exactly which activity you are taken to depends on the server response. Here's my problem: If you go HOME during this login/download process, at some point in the near future your download will complete and will invoke startActivity(). So then the new activity will be pushed to the foreground, rudely interrupting the user. I can't start the activity before I start the download, because, as I mentioned earlier, the activity I start depends on the result of the download. I would obviously not like to interrupt the user like this. One way to solve this is to refrain from calling startActivity() until the user returns to the app. I can do this by keeping track of the LoginActivity's onStop() and onRestart(). But I'm wondering, is there any way to create the activity while it is in the background? That way the user returns to the app and he is ready to go... otherwise he would have to wait for the new activity to be created (which could take some time because the new activity also has to download and display some data). Update: Guess what? I LIED! I could have sworn that starting this activity was causing it to come to the foreground, but I went back to test it again and the problem has magically disappeared. I tested in both 1.6 and 2.0.1 and both OSes were smart enough not to bring a backgrounded task to the front.

    Read the article

  • Python - How to wake up a sleeping process- multiprocessing?

    - by user1162512
    I need to wake up a sleeping process. The time (t) for which it sleeps is calculated as t = D/S. Now, since S varies (it can increase or decrease), I need to increase or decrease the sleeping time as well. The speed is received over a UDP protocol. So, how do I change the sleeping time of a process, keeping in mind the following: if, as per the previous speed S1, the time to sleep is D/S1, and the speed then changes to S2, the process should now sleep for the new time, i.e. D/S2. Since it has already slept for D/S1 time, it should now sleep for D/S2 - D/S1. How would I do that? As of right now, I'm just assuming that the speed will remain constant throughout the program, hence not notifying the process. But how would I do it according to the above condition?

        def process2():
            p = multiprocessing.current_process()
            time.sleep(secs1)
            # send some packet1 via UDP
            time.sleep(secs2)
            # send some packet2 via UDP
            time.sleep(secs3)
            # send some packet3 via UDP

    Also, as in threads: (1) threading.activeCount() returns the number of Thread objects that are active; (2) threading.currentThread() returns the current Thread object, corresponding to the caller's thread of control; (3) threading.enumerate() returns a list of all Thread objects that are currently active. What are the similar functions for getting the active count and enumerating processes in multiprocessing?
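
    A minimal sketch of one way to make the sleep interruptible, assuming the new speed can be pushed to the process over a Queue (names like sleeper are made up for illustration):

        import multiprocessing
        import time
        from queue import Empty

        def sleeper(distance, speed_queue):
            """Sleep for distance / speed, recomputing whenever a new speed arrives."""
            speed = speed_queue.get()                  # initial speed, e.g. S1
            start = time.monotonic()
            while True:
                remaining = distance / speed - (time.monotonic() - start)
                if remaining <= 0:
                    return                             # slept long enough; send the packet
                try:
                    # Block on the queue instead of time.sleep(): a new speed
                    # wakes the process early and the remaining time is recomputed.
                    speed = speed_queue.get(timeout=remaining)
                except Empty:
                    return                             # timed out: the full D/S elapsed

        if __name__ == "__main__":
            q = multiprocessing.Queue()
            p = multiprocessing.Process(target=sleeper, args=(100.0, q))
            p.start()
            q.put(5.0)       # initial speed
            time.sleep(3)
            q.put(10.0)      # speed changed; the process shortens its sleep
            p.join()

    For the bookkeeping part of the question, multiprocessing.active_children() plays roughly the role of threading.enumerate() (and its length that of activeCount()), while multiprocessing.current_process() corresponds to threading.currentThread().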

    Read the article

  • Typical Search, Result and Detail Workflow Staying Within an Android Tab

    - by Justin
    So, I've been banging my head looking for a good solution for a few days and am stuck. I have a search screen (Activity) in a tab, and after the user enters a value and clicks "search" I would like the results to come back in that same tab, and then, if an item from the results is selected, to show more detailed results, in that same tab. I have it all working now in separate activities, and even the first step working in a tab, but as soon as I call the activity to process the search results, i.e. calling startActivity(i) for the results Activity, the results displayed are not in the tab! I am having a very difficult time getting this flow to work all under a tab. Any thoughts on how to make this happen? I keep hearing that Android views should be used instead of activities, but am I then to assume that all the logic I have right now for 3 activities needs to go inside 1 activity, and then I need to handle setting the content and state for each of these cases? Plus, won't the history stack break, since pressing the back button will take the user out of the application instead of taking them from, say, the search results to the search screen, or the details to the search results? This seems like a mess. Can anyone show a more complex example of tabs, or how one might have a simple search, result, and detail workflow staying in a tab? I have seen a few questions on this concept of keeping activities "within a tab", but no good resolution. Please help.

    Read the article

  • Mercurial: Class library that will exist for both .NET 3.5 and 4.0?

    - by Lasse V. Karlsen
    I have a rather big class library written in .NET 3.5 that I'd like to upgrade to make available for .NET 4.0 as well. In that process, I will rip out a lot of old junk and rewrite some code to better take advantage of the new classes and support in .NET 4.0 (like the TPL). The class libraries will thus diverge, but still be similar enough that some bug fixes can be applied to both in the same manner. How should I best organize this class library in Mercurial? I'm using Kiln (FogBugz), if that matters. I'm thinking:

    - Named branches in one repository: I can then transplant any bug fixes from one to the other.
    - Unnamed branches in one repository: I can also transplant, but I think this will look messy.
    - Separate repositories: I will have to reimplement the bug fixes (or use a non-Mercurial-integrated compare tool to help me).

    What would you do? (Any other alternatives that I haven't thought of are welcome as well.) Note that the class libraries will diverge pretty heavily in areas: I have some remnants of old collection-type code that does something similar to LINQ, which I will remove, and some code that uses it that I will rewrite to use the LINQ methods instead. As such, just copying the project files and using #if NET40..#endif sections is not going to work out. Also, the 3.5 version of the class library will not be getting many new features, mostly just critical bug fixes, so keeping both versions equally "alive" isn't really necessary. Thus, separate copies of all the files are good enough.

    Read the article

  • Layout: how to make image to change its width and height proportionally?

    - by Exterminator13
    I have the following layout:

        <?xml version="1.0" encoding="utf-8"?>
        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:orientation="horizontal">

            <TextView
                android:id="@+id/title"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_alignParentLeft="true"
                android:layout_centerVertical="true"
                android:layout_toLeftOf="@+id/my_image"
                android:ellipsize="end"
                android:singleLine="true"
                android:text="Some text"
                android:textAppearance="?android:attr/textAppearanceMedium" />

            <ImageView
                android:id="@+id/my_image"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_alignTop="@+id/title"
                android:layout_alignBottom="@+id/title"
                android:layout_alignParentRight="true"
                android:layout_centerVertical="true"
                android:adjustViewBounds="true"
                android:src="@drawable/my_bitmap_image" />

    This layout does almost what I need: it makes the image view height the same as the text view's, and the image's graphic content is stretched while keeping its aspect ratio. But the width of the image view does not change! As a result, I have a wide gap between the text and the image view! As a temporary solution, I override View#onLayout. The question: how do I change the image width in the XML layout? UPDATE: This is the final layout I need (text + a few images). Look at the first image: its width should be exactly the same as the scaled image in it, with no padding or margins.

    Read the article

  • Performance difference between functions and pattern matching in Mathematica

    - by Samsdram
    So Mathematica is different from other dialects of Lisp because it blurs the lines between functions and macros. In Mathematica, if a user wanted to write a mathematical function, they would likely use pattern matching, like f[x_]:= x*x, instead of f=Function[{x},x*x], though both return the same result when called with f[x]. My understanding is that the first approach is something equivalent to a Lisp macro and, in my experience, is favored because of the more concise syntax. So I have two questions: is there a performance difference between executing functions versus the pattern-matching/macro approach? (Part of me wouldn't be surprised if functions were actually transformed into some version of macros to allow features like Listable to be implemented.) The reason I care about this question is the recent set of questions (1) (2) about trying to catch Mathematica errors in large programs. If most of the computations were defined in terms of Functions, it seems to me that keeping track of the order of evaluation and where an error originated would be easier than trying to catch the error after the input has been rewritten by the successive application of macros/patterns.

    Read the article

  • Why can a public class not inherit from a less visible one?

    - by Dan Tao
    I apologize if this question has been asked before. I've searched SO somewhat and wasn't able to find it. I'm just curious what the rationale behind this design was/is. Obviously I understand that private/internal members of a base type cannot, nor should they, be exposed through a derived public type. But it seems to my naive thinking that the "hidden" parts could easily remain hidden while some base functionality is still shared and a new interface is exposed publicly. I'm thinking of something along these lines:

    Assembly X

        internal class InternalClass
        {
            protected virtual void DoSomethingProtected()
            {
                // Let's say this method provides some useful functionality.
                // Its visibility is quite limited (only to derived types in
                // the same assembly), but at least it's there.
            }
        }

        public class PublicClass : InternalClass
        {
            public void DoSomethingPublic()
            {
                // Now let's say this method is useful enough that this type
                // should be public. What's keeping us from leveraging the
                // base functionality laid out in InternalClass's implementation,
                // without exposing anything that shouldn't be exposed?
            }
        }

    Assembly Y

        public class OtherPublicClass : PublicClass
        {
            // It seems (again, to my naive mind) that this could work. This class
            // simply wouldn't be able to "see" any of the methods of InternalClass
            // from AssemblyX directly. But it could still access the public and
            // protected members of PublicClass that weren't inherited from
            // InternalClass. Does this make sense? What am I missing?
        }

    Read the article

  • How important is the website logo on a page?

    - by meo
    I have stopped inserting "img" tags for the logo of the page, because it's not an image that is part of the content; it's a design element, but it's still information I want to have control over. So I just write the title in an "a" element with display: block and overflow: hidden, and I push the text out with some padding. I think that's a good solution for SEO, because you keep control of how important the logo should be on a page. But now my dilemma starts: how important is the logo of a page? "A List Apart" puts the logo in an h1 element. But is the logo really that important? On article pages you then have two h1 elements (the logo and the title of the article). Most sites just use <a><img blabla /></a>, but I don't like this solution, because I want to use img only for images that are part of the content... It's kind of a philosophical question; I hope you can give me some input or some articles to read about it...

    Read the article

  • Protecting routes with authentication in an AngularJS app

    - by Chris White
    Some of my AngularJS routes are to pages which require the user to be authenticated with my API. In those cases, I'd like the user to be redirected to the login page so they can authenticate. For example, if a guest accesses /account/settings, they should be redirected to the login form. From brainstorming, I came up with listening for the $locationChangeStart event and, if it's a location which requires authentication, redirecting the user to the login form. I can do that simply enough in my application's run() block:

        .run(['$rootScope', function($rootScope) {
            $rootScope.$on('$locationChangeStart', function(event) {
                // Decide if this location requires an authenticated user and redirect appropriately
            });
        }]);

    The next step is keeping a list of all of my application's routes that require authentication, so I tried adding a parameter to my $routeProvider:

        $routeProvider.when('/account/settings', {
            templateUrl: '/partials/account/settings.html',
            controller: 'AccountSettingCtrl',
            requiresAuthentication: true
        });

    But I don't see any way to get the requiresAuthentication key from within the $locationChangeStart event. Am I overthinking this? I tried to find a way for Angular to do this natively but couldn't find anything.

    Read the article
